00:00:00.000 Started by upstream project "autotest-per-patch" build number 132330
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "jbp-per-patch" build number 25766
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.030 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.030 The recommended git tool is: git
00:00:00.031 using credential 00000000-0000-0000-0000-000000000002
00:00:00.032 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.044 Fetching changes from the remote Git repository
00:00:00.049 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.065 Using shallow fetch with depth 1
00:00:00.065 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.065 > git --version # timeout=10
00:00:00.085 > git --version # 'git version 2.39.2'
00:00:00.085 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.111 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.111 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/84/24384/13 # timeout=5
00:00:04.593 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.603 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.613 Checking out Revision 6d4840695fb479ead742a39eb3a563a20cd15407 (FETCH_HEAD)
00:00:04.614 > git config core.sparsecheckout # timeout=10
00:00:04.623 > git read-tree -mu HEAD # timeout=10
00:00:04.636 > git checkout -f 6d4840695fb479ead742a39eb3a563a20cd15407 # timeout=5
00:00:04.654 Commit message: "jenkins/jjb-config: Commonize distro-based params"
00:00:04.654 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.745 [Pipeline] Start of Pipeline
00:00:04.758 [Pipeline] library
00:00:04.759 Loading library shm_lib@master
00:00:04.759 Library shm_lib@master is cached. Copying from home.
00:00:04.776 [Pipeline] node
00:00:04.787 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:04.789 [Pipeline] {
00:00:04.799 [Pipeline] catchError
00:00:04.801 [Pipeline] {
00:00:04.814 [Pipeline] wrap
00:00:04.822 [Pipeline] {
00:00:04.830 [Pipeline] stage
00:00:04.832 [Pipeline] { (Prologue)
00:00:04.850 [Pipeline] echo
00:00:04.852 Node: VM-host-WFP7
00:00:04.858 [Pipeline] cleanWs
00:00:04.871 [WS-CLEANUP] Deleting project workspace...
00:00:04.871 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.879 [WS-CLEANUP] done
00:00:05.082 [Pipeline] setCustomBuildProperty
00:00:05.171 [Pipeline] httpRequest
00:00:05.648 [Pipeline] echo
00:00:05.650 Sorcerer 10.211.164.20 is alive
00:00:05.660 [Pipeline] retry
00:00:05.662 [Pipeline] {
00:00:05.676 [Pipeline] httpRequest
00:00:05.680 HttpMethod: GET
00:00:05.681 URL: http://10.211.164.20/packages/jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:05.681 Sending request to url: http://10.211.164.20/packages/jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:05.683 Response Code: HTTP/1.1 200 OK
00:00:05.683 Success: Status code 200 is in the accepted range: 200,404
00:00:05.684 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:05.993 [Pipeline] }
00:00:06.032 [Pipeline] // retry
00:00:06.038 [Pipeline] sh
00:00:06.316 + tar --no-same-owner -xf jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:06.332 [Pipeline] httpRequest
00:00:07.239 [Pipeline] echo
00:00:07.241 Sorcerer 10.211.164.20 is alive
00:00:07.248 [Pipeline] retry
00:00:07.249 [Pipeline] {
00:00:07.260 [Pipeline] httpRequest
00:00:07.264 HttpMethod: GET
00:00:07.264 URL: http://10.211.164.20/packages/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz
00:00:07.264 Sending request to url: http://10.211.164.20/packages/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz
00:00:07.274 Response Code: HTTP/1.1 200 OK
00:00:07.275 Success: Status code 200 is in the accepted range: 200,404
00:00:07.275 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz
00:00:41.357 [Pipeline] }
00:00:41.379 [Pipeline] // retry
00:00:41.388 [Pipeline] sh
00:00:41.676 + tar --no-same-owner -xf spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz
00:00:44.981 [Pipeline] sh
00:00:45.261 + git -C spdk log --oneline -n5
00:00:45.261 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option
00:00:45.262 73f18e890 lib/reduce: fix the magic number of empty mapping detection.
00:00:45.262 029355612 bdev_ut: add manual examine bdev unit test case
00:00:45.262 fc96810c2 bdev: remove bdev from examine allow list on unregister
00:00:45.262 a0c128549 bdev/nvme: Make bdev nvme get and set opts APIs public
00:00:45.283 [Pipeline] writeFile
00:00:45.300 [Pipeline] sh
00:00:45.585 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:45.596 [Pipeline] sh
00:00:45.877 + cat autorun-spdk.conf
00:00:45.877 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:45.877 SPDK_RUN_ASAN=1
00:00:45.877 SPDK_RUN_UBSAN=1
00:00:45.877 SPDK_TEST_RAID=1
00:00:45.877 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:45.884 RUN_NIGHTLY=0
00:00:45.886 [Pipeline] }
00:00:45.900 [Pipeline] // stage
00:00:45.915 [Pipeline] stage
00:00:45.918 [Pipeline] { (Run VM)
00:00:45.930 [Pipeline] sh
00:00:46.209 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:46.209 + echo 'Start stage prepare_nvme.sh'
00:00:46.209 Start stage prepare_nvme.sh
00:00:46.209 + [[ -n 1 ]]
00:00:46.209 + disk_prefix=ex1
00:00:46.209 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:46.209 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:46.209 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:46.209 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:46.209 ++ SPDK_RUN_ASAN=1
00:00:46.209 ++ SPDK_RUN_UBSAN=1
00:00:46.210 ++ SPDK_TEST_RAID=1
00:00:46.210 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:46.210 ++ RUN_NIGHTLY=0
00:00:46.210 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:46.210 + nvme_files=()
00:00:46.210 + declare -A nvme_files
00:00:46.210 + backend_dir=/var/lib/libvirt/images/backends
00:00:46.210 + nvme_files['nvme.img']=5G
00:00:46.210 + nvme_files['nvme-cmb.img']=5G
00:00:46.210 + nvme_files['nvme-multi0.img']=4G
00:00:46.210 + nvme_files['nvme-multi1.img']=4G
00:00:46.210 + nvme_files['nvme-multi2.img']=4G
00:00:46.210 + nvme_files['nvme-openstack.img']=8G
00:00:46.210 + nvme_files['nvme-zns.img']=5G
00:00:46.210 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:46.210 + (( SPDK_TEST_FTL == 1 ))
00:00:46.210 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:46.210 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:46.210 + for nvme in "${!nvme_files[@]}"
00:00:46.210 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:00:46.210 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:46.210 + for nvme in "${!nvme_files[@]}"
00:00:46.210 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:00:46.210 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:46.210 + for nvme in "${!nvme_files[@]}"
00:00:46.210 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:00:46.210 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:46.210 + for nvme in "${!nvme_files[@]}"
00:00:46.210 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:00:46.210 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:46.210 + for nvme in "${!nvme_files[@]}"
00:00:46.210 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:00:46.210 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:46.210 + for nvme in "${!nvme_files[@]}"
00:00:46.210 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:00:46.210 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:46.210 + for nvme in "${!nvme_files[@]}"
00:00:46.210 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:00:46.210 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:46.468 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:00:46.468 + echo 'End stage prepare_nvme.sh'
00:00:46.468 End stage prepare_nvme.sh
00:00:46.481 [Pipeline] sh
00:00:46.815 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:46.815 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:00:46.815
00:00:46.815 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:46.815 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:46.815 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:46.815 HELP=0
00:00:46.815 DRY_RUN=0
00:00:46.815 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:00:46.815 NVME_DISKS_TYPE=nvme,nvme,
00:00:46.815 NVME_AUTO_CREATE=0
00:00:46.815 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:00:46.815 NVME_CMB=,,
00:00:46.815 NVME_PMR=,,
00:00:46.815 NVME_ZNS=,,
00:00:46.815 NVME_MS=,,
00:00:46.815 NVME_FDP=,,
00:00:46.815 SPDK_VAGRANT_DISTRO=fedora39
00:00:46.815 SPDK_VAGRANT_VMCPU=10
00:00:46.815 SPDK_VAGRANT_VMRAM=12288
00:00:46.815 SPDK_VAGRANT_PROVIDER=libvirt
00:00:46.815 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:46.815 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:46.815 SPDK_OPENSTACK_NETWORK=0
00:00:46.815 VAGRANT_PACKAGE_BOX=0
00:00:46.815 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:46.815 FORCE_DISTRO=true
00:00:46.815 VAGRANT_BOX_VERSION=
00:00:46.815 EXTRA_VAGRANTFILES=
00:00:46.815 NIC_MODEL=virtio
00:00:46.816
00:00:46.816 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:46.816 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:48.720 Bringing machine 'default' up with 'libvirt' provider...
00:00:49.290 ==> default: Creating image (snapshot of base box volume).
00:00:49.290 ==> default: Creating domain with the following settings...
00:00:49.290 ==> default:  -- Name: fedora39-39-1.5-1721788873-2326_default_1732017172_48d4e7eee2bf07b42bda
00:00:49.290 ==> default:  -- Domain type: kvm
00:00:49.290 ==> default:  -- Cpus: 10
00:00:49.290 ==> default:  -- Feature: acpi
00:00:49.290 ==> default:  -- Feature: apic
00:00:49.290 ==> default:  -- Feature: pae
00:00:49.290 ==> default:  -- Memory: 12288M
00:00:49.290 ==> default:  -- Memory Backing: hugepages:
00:00:49.290 ==> default:  -- Management MAC:
00:00:49.290 ==> default:  -- Loader:
00:00:49.290 ==> default:  -- Nvram:
00:00:49.290 ==> default:  -- Base box: spdk/fedora39
00:00:49.290 ==> default:  -- Storage pool: default
00:00:49.290 ==> default:  -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732017172_48d4e7eee2bf07b42bda.img (20G)
00:00:49.290 ==> default:  -- Volume Cache: default
00:00:49.290 ==> default:  -- Kernel:
00:00:49.290 ==> default:  -- Initrd:
00:00:49.290 ==> default:  -- Graphics Type: vnc
00:00:49.290 ==> default:  -- Graphics Port: -1
00:00:49.290 ==> default:  -- Graphics IP: 127.0.0.1
00:00:49.290 ==> default:  -- Graphics Password: Not defined
00:00:49.290 ==> default:  -- Video Type: cirrus
00:00:49.290 ==> default:  -- Video VRAM: 9216
00:00:49.290 ==> default:  -- Sound Type:
00:00:49.290 ==> default:  -- Keymap: en-us
00:00:49.290 ==> default:  -- TPM Path:
00:00:49.290 ==> default:  -- INPUT: type=mouse, bus=ps2
00:00:49.290 ==> default:  -- Command line args:
00:00:49.290 ==> default:  -> value=-device,
00:00:49.290 ==> default:  -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:49.290 ==> default:  -> value=-drive,
00:00:49.290 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:00:49.290 ==> default:  -> value=-device,
00:00:49.290 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:49.290 ==> default:  -> value=-device,
00:00:49.290 ==> default:  -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:49.290 ==> default:  -> value=-drive,
00:00:49.290 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:49.290 ==> default:  -> value=-device,
00:00:49.290 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:49.290 ==> default:  -> value=-drive,
00:00:49.290 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:49.290 ==> default:  -> value=-device,
00:00:49.290 ==> default:  -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:49.290 ==> default:  -> value=-drive,
00:00:49.290 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:49.290 ==> default:  -> value=-device,
00:00:49.290 ==> default:  -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:49.290 ==> default: Creating shared folders metadata...
00:00:49.550 ==> default: Starting domain.
00:00:50.930 ==> default: Waiting for domain to get an IP address...
00:01:09.027 ==> default: Waiting for SSH to become available...
00:01:09.027 ==> default: Configuring and enabling network interfaces...
00:01:14.306 default: SSH address: 192.168.121.124:22
00:01:14.306 default: SSH username: vagrant
00:01:14.306 default: SSH auth method: private key
00:01:16.873 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:26.863 ==> default: Mounting SSHFS shared folder...
00:01:27.803 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:27.803 ==> default: Checking Mount..
00:01:29.185 ==> default: Folder Successfully Mounted!
00:01:29.185 ==> default: Running provisioner: file...
00:01:30.569 default: ~/.gitconfig => .gitconfig
00:01:31.138
00:01:31.138 SUCCESS!
00:01:31.138
00:01:31.138 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:31.138 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:31.138 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:31.138
00:01:31.148 [Pipeline] }
00:01:31.164 [Pipeline] // stage
00:01:31.174 [Pipeline] dir
00:01:31.174 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:31.176 [Pipeline] {
00:01:31.190 [Pipeline] catchError
00:01:31.192 [Pipeline] {
00:01:31.205 [Pipeline] sh
00:01:31.487 + vagrant ssh-config --host vagrant
00:01:31.488 + sed -ne /^Host/,$p
00:01:31.488 + tee ssh_conf
00:01:34.048 Host vagrant
00:01:34.048 HostName 192.168.121.124
00:01:34.048 User vagrant
00:01:34.048 Port 22
00:01:34.048 UserKnownHostsFile /dev/null
00:01:34.048 StrictHostKeyChecking no
00:01:34.048 PasswordAuthentication no
00:01:34.048 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:34.048 IdentitiesOnly yes
00:01:34.048 LogLevel FATAL
00:01:34.048 ForwardAgent yes
00:01:34.048 ForwardX11 yes
00:01:34.048
00:01:34.063 [Pipeline] withEnv
00:01:34.066 [Pipeline] {
00:01:34.079 [Pipeline] sh
00:01:34.362 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:34.362 source /etc/os-release
00:01:34.362 [[ -e /image.version ]] && img=$(< /image.version)
00:01:34.362 # Minimal, systemd-like check.
00:01:34.362 if [[ -e /.dockerenv ]]; then
00:01:34.362 # Clear garbage from the node's name:
00:01:34.362 # agt-er_autotest_547-896 -> autotest_547-896
00:01:34.362 # $HOSTNAME is the actual container id
00:01:34.362 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:34.362 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:34.362 # We can assume this is a mount from a host where container is running,
00:01:34.362 # so fetch its hostname to easily identify the target swarm worker.
00:01:34.362 container="$(< /etc/hostname) ($agent)"
00:01:34.362 else
00:01:34.362 # Fallback
00:01:34.362 container=$agent
00:01:34.362 fi
00:01:34.362 fi
00:01:34.362 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:34.362
00:01:34.634 [Pipeline] }
00:01:34.651 [Pipeline] // withEnv
00:01:34.659 [Pipeline] setCustomBuildProperty
00:01:34.673 [Pipeline] stage
00:01:34.676 [Pipeline] { (Tests)
00:01:34.693 [Pipeline] sh
00:01:34.976 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:35.249 [Pipeline] sh
00:01:35.536 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:35.813 [Pipeline] timeout
00:01:35.813 Timeout set to expire in 1 hr 30 min
00:01:35.815 [Pipeline] {
00:01:35.829 [Pipeline] sh
00:01:36.149 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:36.717 HEAD is now at dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option
00:01:36.730 [Pipeline] sh
00:01:37.010 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:37.283 [Pipeline] sh
00:01:37.568 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:37.845 [Pipeline] sh
00:01:38.129 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:38.389 ++ readlink -f spdk_repo
00:01:38.389 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:38.389 + [[ -n /home/vagrant/spdk_repo ]]
00:01:38.389 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:38.389 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:38.389 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:38.389 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:38.389 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:38.389 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:38.389 + cd /home/vagrant/spdk_repo
00:01:38.389 + source /etc/os-release
00:01:38.389 ++ NAME='Fedora Linux'
00:01:38.389 ++ VERSION='39 (Cloud Edition)'
00:01:38.389 ++ ID=fedora
00:01:38.389 ++ VERSION_ID=39
00:01:38.389 ++ VERSION_CODENAME=
00:01:38.389 ++ PLATFORM_ID=platform:f39
00:01:38.389 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:38.389 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:38.389 ++ LOGO=fedora-logo-icon
00:01:38.389 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:38.389 ++ HOME_URL=https://fedoraproject.org/
00:01:38.389 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:38.389 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:38.389 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:38.389 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:38.389 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:38.389 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:38.389 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:38.389 ++ SUPPORT_END=2024-11-12
00:01:38.389 ++ VARIANT='Cloud Edition'
00:01:38.389 ++ VARIANT_ID=cloud
00:01:38.389 + uname -a
00:01:38.389 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:38.389 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:38.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:38.959 Hugepages
00:01:38.959 node hugesize free / total
00:01:38.959 node0 1048576kB 0 / 0
00:01:38.959 node0 2048kB 0 / 0
00:01:38.959
00:01:38.959 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:38.959 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:38.959 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:38.959 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:38.959 + rm -f /tmp/spdk-ld-path
00:01:38.959 + source autorun-spdk.conf
00:01:38.959 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:38.959 ++ SPDK_RUN_ASAN=1
00:01:38.959 ++ SPDK_RUN_UBSAN=1
00:01:38.959 ++ SPDK_TEST_RAID=1
00:01:38.959 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:38.959 ++ RUN_NIGHTLY=0
00:01:38.959 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:38.959 + [[ -n '' ]]
00:01:38.959 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:38.959 + for M in /var/spdk/build-*-manifest.txt
00:01:38.959 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:38.959 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:39.218 + for M in /var/spdk/build-*-manifest.txt
00:01:39.218 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:39.218 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:39.218 + for M in /var/spdk/build-*-manifest.txt
00:01:39.218 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:39.219 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:39.219 ++ uname
00:01:39.219 + [[ Linux == \L\i\n\u\x ]]
00:01:39.219 + sudo dmesg -T
00:01:39.219 + sudo dmesg --clear
00:01:39.219 + dmesg_pid=5435
00:01:39.219 + [[ Fedora Linux == FreeBSD ]]
00:01:39.219 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:39.219 + sudo dmesg -Tw
00:01:39.219 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:39.219 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:39.219 + [[ -x /usr/src/fio-static/fio ]]
00:01:39.219 + export FIO_BIN=/usr/src/fio-static/fio
00:01:39.219 + FIO_BIN=/usr/src/fio-static/fio
00:01:39.219 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:39.219 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:39.219 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:39.219 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:39.219 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:39.219 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:39.219 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:39.219 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:39.219 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:39.219 11:53:42 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:39.219 11:53:42 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:39.219 11:53:42 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:39.219 11:53:42 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:39.219 11:53:42 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:39.219 11:53:42 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:39.219 11:53:42 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:39.219 11:53:42 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:39.219 11:53:42 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:39.219 11:53:42 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:39.478 11:53:42 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:39.478 11:53:42 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:39.478 11:53:42 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:39.478 11:53:42 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:39.478 11:53:42 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:39.478 11:53:42 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:39.478 11:53:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.478 11:53:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.479 11:53:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.479 11:53:42 -- paths/export.sh@5 -- $ export PATH
00:01:39.479 11:53:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.479 11:53:42 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:39.479 11:53:42 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:39.479 11:53:42 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732017222.XXXXXX
00:01:39.479 11:53:42 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732017222.fKNSMS
00:01:39.479 11:53:42 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:39.479 11:53:42 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:39.479 11:53:42 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:39.479 11:53:42 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:39.479 11:53:42 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:39.479 11:53:42 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:39.479 11:53:42 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:39.479 11:53:42 -- common/autotest_common.sh@10 -- $ set +x
00:01:39.479 11:53:42 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:39.479 11:53:42 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:39.479 11:53:42 -- pm/common@17 -- $ local monitor
00:01:39.479 11:53:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:39.479 11:53:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:39.479 11:53:42 -- pm/common@21 -- $ date +%s
00:01:39.479 11:53:42 -- pm/common@25 -- $ sleep 1
00:01:39.479 11:53:42 -- pm/common@21 -- $ date +%s
00:01:39.479 11:53:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732017222
00:01:39.479 11:53:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732017222
00:01:39.479 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732017222_collect-cpu-load.pm.log
00:01:39.479 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732017222_collect-vmstat.pm.log
00:01:40.420 11:53:43 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:40.420 11:53:43 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:40.420 11:53:43 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:40.420 11:53:43 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:40.420 11:53:43 -- spdk/autobuild.sh@16 -- $ date -u
00:01:40.420 Tue Nov 19 11:53:43 AM UTC 2024
00:01:40.420 11:53:43 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:40.420 v25.01-pre-197-gdcc2ca8f3
00:01:40.420 11:53:43 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:40.420 11:53:43 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:40.420 11:53:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:40.420 11:53:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:40.420 11:53:43 -- common/autotest_common.sh@10 -- $ set +x
00:01:40.420 ************************************
00:01:40.420 START TEST asan
00:01:40.420 ************************************
00:01:40.420 using asan
00:01:40.420 11:53:43 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:40.420
00:01:40.420 real	0m0.001s
00:01:40.420 user	0m0.000s
00:01:40.420 sys	0m0.000s
00:01:40.420 11:53:43 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:40.420 11:53:43 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:40.420 ************************************
00:01:40.420 END TEST asan
00:01:40.420 ************************************
00:01:40.681 11:53:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:40.681 11:53:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:40.681 11:53:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:40.681 11:53:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:40.681 11:53:43 -- common/autotest_common.sh@10 -- $ set +x
00:01:40.681 ************************************
00:01:40.681 START TEST ubsan
00:01:40.681 ************************************
00:01:40.681 using ubsan
00:01:40.681 11:53:43 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:40.681
00:01:40.681 real	0m0.000s
00:01:40.681 user	0m0.000s
00:01:40.681 sys	0m0.000s
00:01:40.681 11:53:43 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:40.681 11:53:43 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:40.681 ************************************
00:01:40.681 END TEST ubsan
00:01:40.681 ************************************
00:01:40.681 11:53:43 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:40.681 11:53:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:40.681 11:53:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:40.681 11:53:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:40.681 11:53:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:40.681 11:53:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:40.681 11:53:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:40.681 11:53:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:40.681 11:53:43 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:40.681 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:40.681 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:41.251 Using 'verbs' RDMA provider
00:01:57.113 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:15.212 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:15.212 Creating mk/config.mk...done.
00:02:15.212 Creating mk/cc.flags.mk...done.
00:02:15.212 Type 'make' to build.
00:02:15.212 11:54:16 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:15.212 11:54:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:15.212 11:54:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:15.212 11:54:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:15.212 ************************************
00:02:15.212 START TEST make
00:02:15.212 ************************************
00:02:15.212 11:54:16 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:15.212 make[1]: Nothing to be done for 'all'.
00:02:25.191 The Meson build system 00:02:25.191 Version: 1.5.0 00:02:25.191 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:25.191 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:25.191 Build type: native build 00:02:25.191 Program cat found: YES (/usr/bin/cat) 00:02:25.191 Project name: DPDK 00:02:25.191 Project version: 24.03.0 00:02:25.191 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:25.191 C linker for the host machine: cc ld.bfd 2.40-14 00:02:25.191 Host machine cpu family: x86_64 00:02:25.191 Host machine cpu: x86_64 00:02:25.191 Message: ## Building in Developer Mode ## 00:02:25.191 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:25.191 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:25.191 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:25.191 Program python3 found: YES (/usr/bin/python3) 00:02:25.191 Program cat found: YES (/usr/bin/cat) 00:02:25.191 Compiler for C supports arguments -march=native: YES 00:02:25.191 Checking for size of "void *" : 8 00:02:25.191 Checking for size of "void *" : 8 (cached) 00:02:25.191 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:25.191 Library m found: YES 00:02:25.191 Library numa found: YES 00:02:25.191 Has header "numaif.h" : YES 00:02:25.191 Library fdt found: NO 00:02:25.191 Library execinfo found: NO 00:02:25.191 Has header "execinfo.h" : YES 00:02:25.191 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:25.191 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:25.191 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:25.191 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:25.191 Run-time dependency openssl found: YES 3.1.1 00:02:25.191 Run-time dependency libpcap found: YES 1.10.4 00:02:25.191 Has header "pcap.h" with dependency 
libpcap: YES 00:02:25.191 Compiler for C supports arguments -Wcast-qual: YES 00:02:25.191 Compiler for C supports arguments -Wdeprecated: YES 00:02:25.191 Compiler for C supports arguments -Wformat: YES 00:02:25.191 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:25.191 Compiler for C supports arguments -Wformat-security: NO 00:02:25.191 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:25.191 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:25.191 Compiler for C supports arguments -Wnested-externs: YES 00:02:25.191 Compiler for C supports arguments -Wold-style-definition: YES 00:02:25.191 Compiler for C supports arguments -Wpointer-arith: YES 00:02:25.191 Compiler for C supports arguments -Wsign-compare: YES 00:02:25.191 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:25.191 Compiler for C supports arguments -Wundef: YES 00:02:25.191 Compiler for C supports arguments -Wwrite-strings: YES 00:02:25.191 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:25.191 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:25.191 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:25.191 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:25.191 Program objdump found: YES (/usr/bin/objdump) 00:02:25.191 Compiler for C supports arguments -mavx512f: YES 00:02:25.191 Checking if "AVX512 checking" compiles: YES 00:02:25.191 Fetching value of define "__SSE4_2__" : 1 00:02:25.191 Fetching value of define "__AES__" : 1 00:02:25.191 Fetching value of define "__AVX__" : 1 00:02:25.191 Fetching value of define "__AVX2__" : 1 00:02:25.191 Fetching value of define "__AVX512BW__" : 1 00:02:25.191 Fetching value of define "__AVX512CD__" : 1 00:02:25.191 Fetching value of define "__AVX512DQ__" : 1 00:02:25.191 Fetching value of define "__AVX512F__" : 1 00:02:25.191 Fetching value of define "__AVX512VL__" : 1 00:02:25.191 Fetching value of define 
"__PCLMUL__" : 1 00:02:25.191 Fetching value of define "__RDRND__" : 1 00:02:25.191 Fetching value of define "__RDSEED__" : 1 00:02:25.191 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:25.191 Fetching value of define "__znver1__" : (undefined) 00:02:25.191 Fetching value of define "__znver2__" : (undefined) 00:02:25.191 Fetching value of define "__znver3__" : (undefined) 00:02:25.191 Fetching value of define "__znver4__" : (undefined) 00:02:25.191 Library asan found: YES 00:02:25.191 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:25.191 Message: lib/log: Defining dependency "log" 00:02:25.191 Message: lib/kvargs: Defining dependency "kvargs" 00:02:25.191 Message: lib/telemetry: Defining dependency "telemetry" 00:02:25.191 Library rt found: YES 00:02:25.191 Checking for function "getentropy" : NO 00:02:25.191 Message: lib/eal: Defining dependency "eal" 00:02:25.191 Message: lib/ring: Defining dependency "ring" 00:02:25.191 Message: lib/rcu: Defining dependency "rcu" 00:02:25.191 Message: lib/mempool: Defining dependency "mempool" 00:02:25.191 Message: lib/mbuf: Defining dependency "mbuf" 00:02:25.191 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:25.191 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:25.191 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:25.191 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:25.191 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:25.191 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:25.191 Compiler for C supports arguments -mpclmul: YES 00:02:25.191 Compiler for C supports arguments -maes: YES 00:02:25.191 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:25.191 Compiler for C supports arguments -mavx512bw: YES 00:02:25.191 Compiler for C supports arguments -mavx512dq: YES 00:02:25.191 Compiler for C supports arguments -mavx512vl: YES 00:02:25.191 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:25.191 Compiler for C supports arguments -mavx2: YES 00:02:25.191 Compiler for C supports arguments -mavx: YES 00:02:25.192 Message: lib/net: Defining dependency "net" 00:02:25.192 Message: lib/meter: Defining dependency "meter" 00:02:25.192 Message: lib/ethdev: Defining dependency "ethdev" 00:02:25.192 Message: lib/pci: Defining dependency "pci" 00:02:25.192 Message: lib/cmdline: Defining dependency "cmdline" 00:02:25.192 Message: lib/hash: Defining dependency "hash" 00:02:25.192 Message: lib/timer: Defining dependency "timer" 00:02:25.192 Message: lib/compressdev: Defining dependency "compressdev" 00:02:25.192 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:25.192 Message: lib/dmadev: Defining dependency "dmadev" 00:02:25.192 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:25.192 Message: lib/power: Defining dependency "power" 00:02:25.192 Message: lib/reorder: Defining dependency "reorder" 00:02:25.192 Message: lib/security: Defining dependency "security" 00:02:25.192 Has header "linux/userfaultfd.h" : YES 00:02:25.192 Has header "linux/vduse.h" : YES 00:02:25.192 Message: lib/vhost: Defining dependency "vhost" 00:02:25.192 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:25.192 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:25.192 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:25.192 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:25.192 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:25.192 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:25.192 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:25.192 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:25.192 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:25.192 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:25.192 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:25.192 Configuring doxy-api-html.conf using configuration 00:02:25.192 Configuring doxy-api-man.conf using configuration 00:02:25.192 Program mandb found: YES (/usr/bin/mandb) 00:02:25.192 Program sphinx-build found: NO 00:02:25.192 Configuring rte_build_config.h using configuration 00:02:25.192 Message: 00:02:25.192 ================= 00:02:25.192 Applications Enabled 00:02:25.192 ================= 00:02:25.192 00:02:25.192 apps: 00:02:25.192 00:02:25.192 00:02:25.192 Message: 00:02:25.192 ================= 00:02:25.192 Libraries Enabled 00:02:25.192 ================= 00:02:25.192 00:02:25.192 libs: 00:02:25.192 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:25.192 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:25.192 cryptodev, dmadev, power, reorder, security, vhost, 00:02:25.192 00:02:25.192 Message: 00:02:25.192 =============== 00:02:25.192 Drivers Enabled 00:02:25.192 =============== 00:02:25.192 00:02:25.192 common: 00:02:25.192 00:02:25.192 bus: 00:02:25.192 pci, vdev, 00:02:25.192 mempool: 00:02:25.192 ring, 00:02:25.192 dma: 00:02:25.192 00:02:25.192 net: 00:02:25.192 00:02:25.192 crypto: 00:02:25.192 00:02:25.192 compress: 00:02:25.192 00:02:25.192 vdpa: 00:02:25.192 00:02:25.192 00:02:25.192 Message: 00:02:25.192 ================= 00:02:25.192 Content Skipped 00:02:25.192 ================= 00:02:25.192 00:02:25.192 apps: 00:02:25.192 dumpcap: explicitly disabled via build config 00:02:25.192 graph: explicitly disabled via build config 00:02:25.192 pdump: explicitly disabled via build config 00:02:25.192 proc-info: explicitly disabled via build config 00:02:25.192 test-acl: explicitly disabled via build config 00:02:25.192 test-bbdev: explicitly disabled via build config 00:02:25.192 test-cmdline: explicitly disabled via build config 00:02:25.192 test-compress-perf: explicitly disabled via build config 00:02:25.192 test-crypto-perf: explicitly disabled via build 
config 00:02:25.192 test-dma-perf: explicitly disabled via build config 00:02:25.192 test-eventdev: explicitly disabled via build config 00:02:25.192 test-fib: explicitly disabled via build config 00:02:25.192 test-flow-perf: explicitly disabled via build config 00:02:25.192 test-gpudev: explicitly disabled via build config 00:02:25.192 test-mldev: explicitly disabled via build config 00:02:25.192 test-pipeline: explicitly disabled via build config 00:02:25.192 test-pmd: explicitly disabled via build config 00:02:25.192 test-regex: explicitly disabled via build config 00:02:25.192 test-sad: explicitly disabled via build config 00:02:25.192 test-security-perf: explicitly disabled via build config 00:02:25.192 00:02:25.192 libs: 00:02:25.192 argparse: explicitly disabled via build config 00:02:25.192 metrics: explicitly disabled via build config 00:02:25.192 acl: explicitly disabled via build config 00:02:25.192 bbdev: explicitly disabled via build config 00:02:25.192 bitratestats: explicitly disabled via build config 00:02:25.192 bpf: explicitly disabled via build config 00:02:25.192 cfgfile: explicitly disabled via build config 00:02:25.192 distributor: explicitly disabled via build config 00:02:25.192 efd: explicitly disabled via build config 00:02:25.192 eventdev: explicitly disabled via build config 00:02:25.192 dispatcher: explicitly disabled via build config 00:02:25.192 gpudev: explicitly disabled via build config 00:02:25.192 gro: explicitly disabled via build config 00:02:25.192 gso: explicitly disabled via build config 00:02:25.192 ip_frag: explicitly disabled via build config 00:02:25.192 jobstats: explicitly disabled via build config 00:02:25.192 latencystats: explicitly disabled via build config 00:02:25.192 lpm: explicitly disabled via build config 00:02:25.192 member: explicitly disabled via build config 00:02:25.192 pcapng: explicitly disabled via build config 00:02:25.192 rawdev: explicitly disabled via build config 00:02:25.192 regexdev: explicitly 
disabled via build config 00:02:25.192 mldev: explicitly disabled via build config 00:02:25.192 rib: explicitly disabled via build config 00:02:25.192 sched: explicitly disabled via build config 00:02:25.192 stack: explicitly disabled via build config 00:02:25.192 ipsec: explicitly disabled via build config 00:02:25.192 pdcp: explicitly disabled via build config 00:02:25.192 fib: explicitly disabled via build config 00:02:25.192 port: explicitly disabled via build config 00:02:25.192 pdump: explicitly disabled via build config 00:02:25.192 table: explicitly disabled via build config 00:02:25.192 pipeline: explicitly disabled via build config 00:02:25.192 graph: explicitly disabled via build config 00:02:25.192 node: explicitly disabled via build config 00:02:25.192 00:02:25.192 drivers: 00:02:25.192 common/cpt: not in enabled drivers build config 00:02:25.192 common/dpaax: not in enabled drivers build config 00:02:25.192 common/iavf: not in enabled drivers build config 00:02:25.192 common/idpf: not in enabled drivers build config 00:02:25.192 common/ionic: not in enabled drivers build config 00:02:25.192 common/mvep: not in enabled drivers build config 00:02:25.192 common/octeontx: not in enabled drivers build config 00:02:25.192 bus/auxiliary: not in enabled drivers build config 00:02:25.192 bus/cdx: not in enabled drivers build config 00:02:25.192 bus/dpaa: not in enabled drivers build config 00:02:25.192 bus/fslmc: not in enabled drivers build config 00:02:25.192 bus/ifpga: not in enabled drivers build config 00:02:25.192 bus/platform: not in enabled drivers build config 00:02:25.192 bus/uacce: not in enabled drivers build config 00:02:25.192 bus/vmbus: not in enabled drivers build config 00:02:25.192 common/cnxk: not in enabled drivers build config 00:02:25.192 common/mlx5: not in enabled drivers build config 00:02:25.192 common/nfp: not in enabled drivers build config 00:02:25.192 common/nitrox: not in enabled drivers build config 00:02:25.192 common/qat: not 
in enabled drivers build config 00:02:25.192 common/sfc_efx: not in enabled drivers build config 00:02:25.192 mempool/bucket: not in enabled drivers build config 00:02:25.192 mempool/cnxk: not in enabled drivers build config 00:02:25.192 mempool/dpaa: not in enabled drivers build config 00:02:25.192 mempool/dpaa2: not in enabled drivers build config 00:02:25.192 mempool/octeontx: not in enabled drivers build config 00:02:25.192 mempool/stack: not in enabled drivers build config 00:02:25.192 dma/cnxk: not in enabled drivers build config 00:02:25.192 dma/dpaa: not in enabled drivers build config 00:02:25.192 dma/dpaa2: not in enabled drivers build config 00:02:25.192 dma/hisilicon: not in enabled drivers build config 00:02:25.192 dma/idxd: not in enabled drivers build config 00:02:25.192 dma/ioat: not in enabled drivers build config 00:02:25.192 dma/skeleton: not in enabled drivers build config 00:02:25.192 net/af_packet: not in enabled drivers build config 00:02:25.192 net/af_xdp: not in enabled drivers build config 00:02:25.192 net/ark: not in enabled drivers build config 00:02:25.192 net/atlantic: not in enabled drivers build config 00:02:25.192 net/avp: not in enabled drivers build config 00:02:25.192 net/axgbe: not in enabled drivers build config 00:02:25.192 net/bnx2x: not in enabled drivers build config 00:02:25.192 net/bnxt: not in enabled drivers build config 00:02:25.192 net/bonding: not in enabled drivers build config 00:02:25.192 net/cnxk: not in enabled drivers build config 00:02:25.192 net/cpfl: not in enabled drivers build config 00:02:25.192 net/cxgbe: not in enabled drivers build config 00:02:25.192 net/dpaa: not in enabled drivers build config 00:02:25.192 net/dpaa2: not in enabled drivers build config 00:02:25.192 net/e1000: not in enabled drivers build config 00:02:25.192 net/ena: not in enabled drivers build config 00:02:25.192 net/enetc: not in enabled drivers build config 00:02:25.192 net/enetfec: not in enabled drivers build config 
00:02:25.192 net/enic: not in enabled drivers build config 00:02:25.192 net/failsafe: not in enabled drivers build config 00:02:25.192 net/fm10k: not in enabled drivers build config 00:02:25.192 net/gve: not in enabled drivers build config 00:02:25.192 net/hinic: not in enabled drivers build config 00:02:25.192 net/hns3: not in enabled drivers build config 00:02:25.192 net/i40e: not in enabled drivers build config 00:02:25.192 net/iavf: not in enabled drivers build config 00:02:25.192 net/ice: not in enabled drivers build config 00:02:25.192 net/idpf: not in enabled drivers build config 00:02:25.192 net/igc: not in enabled drivers build config 00:02:25.193 net/ionic: not in enabled drivers build config 00:02:25.193 net/ipn3ke: not in enabled drivers build config 00:02:25.193 net/ixgbe: not in enabled drivers build config 00:02:25.193 net/mana: not in enabled drivers build config 00:02:25.193 net/memif: not in enabled drivers build config 00:02:25.193 net/mlx4: not in enabled drivers build config 00:02:25.193 net/mlx5: not in enabled drivers build config 00:02:25.193 net/mvneta: not in enabled drivers build config 00:02:25.193 net/mvpp2: not in enabled drivers build config 00:02:25.193 net/netvsc: not in enabled drivers build config 00:02:25.193 net/nfb: not in enabled drivers build config 00:02:25.193 net/nfp: not in enabled drivers build config 00:02:25.193 net/ngbe: not in enabled drivers build config 00:02:25.193 net/null: not in enabled drivers build config 00:02:25.193 net/octeontx: not in enabled drivers build config 00:02:25.193 net/octeon_ep: not in enabled drivers build config 00:02:25.193 net/pcap: not in enabled drivers build config 00:02:25.193 net/pfe: not in enabled drivers build config 00:02:25.193 net/qede: not in enabled drivers build config 00:02:25.193 net/ring: not in enabled drivers build config 00:02:25.193 net/sfc: not in enabled drivers build config 00:02:25.193 net/softnic: not in enabled drivers build config 00:02:25.193 net/tap: not in 
enabled drivers build config 00:02:25.193 net/thunderx: not in enabled drivers build config 00:02:25.193 net/txgbe: not in enabled drivers build config 00:02:25.193 net/vdev_netvsc: not in enabled drivers build config 00:02:25.193 net/vhost: not in enabled drivers build config 00:02:25.193 net/virtio: not in enabled drivers build config 00:02:25.193 net/vmxnet3: not in enabled drivers build config 00:02:25.193 raw/*: missing internal dependency, "rawdev" 00:02:25.193 crypto/armv8: not in enabled drivers build config 00:02:25.193 crypto/bcmfs: not in enabled drivers build config 00:02:25.193 crypto/caam_jr: not in enabled drivers build config 00:02:25.193 crypto/ccp: not in enabled drivers build config 00:02:25.193 crypto/cnxk: not in enabled drivers build config 00:02:25.193 crypto/dpaa_sec: not in enabled drivers build config 00:02:25.193 crypto/dpaa2_sec: not in enabled drivers build config 00:02:25.193 crypto/ipsec_mb: not in enabled drivers build config 00:02:25.193 crypto/mlx5: not in enabled drivers build config 00:02:25.193 crypto/mvsam: not in enabled drivers build config 00:02:25.193 crypto/nitrox: not in enabled drivers build config 00:02:25.193 crypto/null: not in enabled drivers build config 00:02:25.193 crypto/octeontx: not in enabled drivers build config 00:02:25.193 crypto/openssl: not in enabled drivers build config 00:02:25.193 crypto/scheduler: not in enabled drivers build config 00:02:25.193 crypto/uadk: not in enabled drivers build config 00:02:25.193 crypto/virtio: not in enabled drivers build config 00:02:25.193 compress/isal: not in enabled drivers build config 00:02:25.193 compress/mlx5: not in enabled drivers build config 00:02:25.193 compress/nitrox: not in enabled drivers build config 00:02:25.193 compress/octeontx: not in enabled drivers build config 00:02:25.193 compress/zlib: not in enabled drivers build config 00:02:25.193 regex/*: missing internal dependency, "regexdev" 00:02:25.193 ml/*: missing internal dependency, "mldev" 
00:02:25.193 vdpa/ifc: not in enabled drivers build config 00:02:25.193 vdpa/mlx5: not in enabled drivers build config 00:02:25.193 vdpa/nfp: not in enabled drivers build config 00:02:25.193 vdpa/sfc: not in enabled drivers build config 00:02:25.193 event/*: missing internal dependency, "eventdev" 00:02:25.193 baseband/*: missing internal dependency, "bbdev" 00:02:25.193 gpu/*: missing internal dependency, "gpudev" 00:02:25.193 00:02:25.193 00:02:25.193 Build targets in project: 85 00:02:25.193 00:02:25.193 DPDK 24.03.0 00:02:25.193 00:02:25.193 User defined options 00:02:25.193 buildtype : debug 00:02:25.193 default_library : shared 00:02:25.193 libdir : lib 00:02:25.193 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:25.193 b_sanitize : address 00:02:25.193 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:25.193 c_link_args : 00:02:25.193 cpu_instruction_set: native 00:02:25.193 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:25.193 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:25.193 enable_docs : false 00:02:25.193 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:25.193 enable_kmods : false 00:02:25.193 max_lcores : 128 00:02:25.193 tests : false 00:02:25.193 00:02:25.193 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:25.193 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:25.193 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:25.193 [2/268] Compiling C object 
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:25.193 [3/268] Linking static target lib/librte_kvargs.a 00:02:25.193 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:25.193 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:25.193 [6/268] Linking static target lib/librte_log.a 00:02:25.452 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:25.452 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:25.452 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:25.452 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.452 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:25.452 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:25.452 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:25.452 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:25.452 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:25.452 [16/268] Linking static target lib/librte_telemetry.a 00:02:25.717 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:25.717 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:25.985 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.243 [20/268] Linking target lib/librte_log.so.24.1 00:02:26.243 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:26.243 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:26.243 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:26.243 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:26.243 [25/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:26.243 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:26.243 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:26.243 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:26.502 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:26.502 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.502 [31/268] Linking target lib/librte_kvargs.so.24.1 00:02:26.502 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:26.502 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:26.762 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:26.762 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:26.762 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:26.762 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:26.762 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:26.762 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:26.762 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:27.021 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:27.021 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:27.021 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:27.021 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:27.021 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:27.280 [46/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:27.280 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:27.280 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:27.280 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:27.539 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:27.539 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:27.539 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:27.540 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:27.799 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:27.799 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:27.799 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:28.058 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:28.058 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:28.058 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:28.058 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:28.058 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:28.058 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:28.058 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:28.318 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:28.318 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:28.318 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:28.577 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:28.578 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 
00:02:28.578 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:28.578 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:28.837 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:28.837 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:28.837 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:28.837 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:28.837 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:28.837 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:29.096 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:29.096 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:29.096 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:29.096 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:29.355 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:29.355 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:29.355 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:29.355 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:29.614 [85/268] Linking static target lib/librte_eal.a 00:02:29.614 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:29.614 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:29.614 [88/268] Linking static target lib/librte_ring.a 00:02:29.614 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:29.614 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:29.873 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:29.873 [92/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:29.873 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:29.873 [94/268] Linking static target lib/librte_mempool.a 00:02:30.133 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:30.133 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.133 [97/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:30.133 [98/268] Linking static target lib/librte_rcu.a 00:02:30.391 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:30.391 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:30.391 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:30.391 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:30.651 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:30.651 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:30.651 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:30.651 [106/268] Linking static target lib/librte_net.a 00:02:30.651 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:30.651 [108/268] Linking static target lib/librte_meter.a 00:02:30.651 [109/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.910 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:30.910 [111/268] Linking static target lib/librte_mbuf.a 00:02:30.910 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:30.910 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:31.170 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.170 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.170 [116/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:31.170 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.170 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:31.429 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:31.688 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:31.947 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.947 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:31.947 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:31.947 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:31.947 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:31.947 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:31.947 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:32.207 [128/268] Linking static target lib/librte_pci.a 00:02:32.207 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:32.207 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:32.207 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:32.207 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:32.467 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:32.467 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:32.467 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.467 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:32.467 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:32.467 
[138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:32.467 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:32.467 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:32.467 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:32.727 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:32.727 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:32.727 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:32.727 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:32.727 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:32.727 [147/268] Linking static target lib/librte_cmdline.a 00:02:32.986 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:32.986 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:33.245 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:33.245 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:33.245 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:33.245 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:33.245 [154/268] Linking static target lib/librte_timer.a 00:02:33.505 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:33.505 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:33.505 [157/268] Linking static target lib/librte_compressdev.a 00:02:33.764 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:33.764 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:33.764 [160/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:33.764 [161/268] Linking static target lib/librte_ethdev.a 00:02:33.764 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:34.024 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:34.024 [164/268] Linking static target lib/librte_hash.a 00:02:34.024 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.024 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:34.024 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:34.024 [168/268] Linking static target lib/librte_dmadev.a 00:02:34.283 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:34.283 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.283 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:34.283 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:34.543 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.543 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:34.803 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:34.803 [176/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:34.803 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:35.063 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:35.063 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.063 [180/268] Linking static target lib/librte_cryptodev.a 00:02:35.063 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:35.063 [182/268] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:35.063 [183/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.063 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:35.323 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:35.323 [186/268] Linking static target lib/librte_power.a 00:02:35.583 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:35.583 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:35.583 [189/268] Linking static target lib/librte_reorder.a 00:02:35.583 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:35.583 [191/268] Linking static target lib/librte_security.a 00:02:35.583 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:35.852 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:36.127 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:36.127 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.387 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.387 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.387 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:36.648 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:36.648 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:36.908 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:36.908 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:36.908 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:37.168 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 
00:02:37.168 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:37.168 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:37.168 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:37.168 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:37.426 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:37.426 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:37.426 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.684 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:37.684 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:37.684 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:37.684 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:37.684 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:37.684 [217/268] Linking static target drivers/librte_bus_vdev.a 00:02:37.684 [218/268] Linking static target drivers/librte_bus_pci.a 00:02:37.684 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:37.941 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:37.941 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:37.941 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.941 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:38.199 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.199 [225/268] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.199 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:38.199 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.574 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:40.509 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.509 [230/268] Linking target lib/librte_eal.so.24.1 00:02:40.766 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:40.766 [232/268] Linking target lib/librte_meter.so.24.1 00:02:40.766 [233/268] Linking target lib/librte_pci.so.24.1 00:02:40.766 [234/268] Linking target lib/librte_ring.so.24.1 00:02:40.766 [235/268] Linking target lib/librte_timer.so.24.1 00:02:40.766 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:40.766 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:40.766 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:40.766 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:41.024 [240/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:41.024 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:41.024 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:41.024 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:41.024 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:41.024 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:41.024 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:41.024 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:41.282 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:41.282 [249/268] 
Linking target drivers/librte_mempool_ring.so.24.1 00:02:41.282 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:41.282 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:41.282 [252/268] Linking target lib/librte_net.so.24.1 00:02:41.282 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:41.282 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:41.539 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:41.539 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:41.539 [257/268] Linking target lib/librte_hash.so.24.1 00:02:41.539 [258/268] Linking target lib/librte_security.so.24.1 00:02:41.539 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:41.796 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:42.729 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.729 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:42.989 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:42.989 [264/268] Linking target lib/librte_power.so.24.1 00:02:43.274 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:43.533 [266/268] Linking static target lib/librte_vhost.a 00:02:46.068 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.068 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:46.068 INFO: autodetecting backend as ninja 00:02:46.068 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:08.030 CC lib/ut_mock/mock.o 00:03:08.030 CC lib/log/log.o 00:03:08.030 CC lib/log/log_flags.o 00:03:08.030 CC lib/log/log_deprecated.o 00:03:08.030 CC lib/ut/ut.o 00:03:08.030 LIB libspdk_ut_mock.a 00:03:08.030 LIB libspdk_ut.a 
00:03:08.030 SO libspdk_ut.so.2.0 00:03:08.030 SO libspdk_ut_mock.so.6.0 00:03:08.030 LIB libspdk_log.a 00:03:08.030 SYMLINK libspdk_ut.so 00:03:08.030 SYMLINK libspdk_ut_mock.so 00:03:08.030 SO libspdk_log.so.7.1 00:03:08.030 SYMLINK libspdk_log.so 00:03:08.030 CC lib/ioat/ioat.o 00:03:08.030 CC lib/dma/dma.o 00:03:08.030 CXX lib/trace_parser/trace.o 00:03:08.030 CC lib/util/base64.o 00:03:08.030 CC lib/util/bit_array.o 00:03:08.030 CC lib/util/crc16.o 00:03:08.030 CC lib/util/crc32.o 00:03:08.030 CC lib/util/cpuset.o 00:03:08.030 CC lib/util/crc32c.o 00:03:08.030 CC lib/vfio_user/host/vfio_user_pci.o 00:03:08.030 CC lib/vfio_user/host/vfio_user.o 00:03:08.030 CC lib/util/crc32_ieee.o 00:03:08.030 CC lib/util/crc64.o 00:03:08.030 CC lib/util/dif.o 00:03:08.030 CC lib/util/fd.o 00:03:08.030 LIB libspdk_dma.a 00:03:08.030 CC lib/util/fd_group.o 00:03:08.030 SO libspdk_dma.so.5.0 00:03:08.030 LIB libspdk_ioat.a 00:03:08.030 CC lib/util/file.o 00:03:08.030 SO libspdk_ioat.so.7.0 00:03:08.030 CC lib/util/hexlify.o 00:03:08.030 SYMLINK libspdk_dma.so 00:03:08.030 CC lib/util/iov.o 00:03:08.030 CC lib/util/math.o 00:03:08.288 SYMLINK libspdk_ioat.so 00:03:08.288 CC lib/util/net.o 00:03:08.288 CC lib/util/pipe.o 00:03:08.288 LIB libspdk_vfio_user.a 00:03:08.288 SO libspdk_vfio_user.so.5.0 00:03:08.288 CC lib/util/strerror_tls.o 00:03:08.288 CC lib/util/string.o 00:03:08.288 CC lib/util/uuid.o 00:03:08.288 SYMLINK libspdk_vfio_user.so 00:03:08.288 CC lib/util/xor.o 00:03:08.288 CC lib/util/zipf.o 00:03:08.288 CC lib/util/md5.o 00:03:08.854 LIB libspdk_util.a 00:03:08.854 SO libspdk_util.so.10.1 00:03:08.854 LIB libspdk_trace_parser.a 00:03:08.854 SYMLINK libspdk_util.so 00:03:09.113 SO libspdk_trace_parser.so.6.0 00:03:09.113 SYMLINK libspdk_trace_parser.so 00:03:09.113 CC lib/json/json_util.o 00:03:09.113 CC lib/json/json_write.o 00:03:09.113 CC lib/json/json_parse.o 00:03:09.113 CC lib/vmd/vmd.o 00:03:09.113 CC lib/vmd/led.o 00:03:09.113 CC lib/rdma_utils/rdma_utils.o 
00:03:09.113 CC lib/idxd/idxd.o 00:03:09.113 CC lib/idxd/idxd_user.o 00:03:09.113 CC lib/env_dpdk/env.o 00:03:09.113 CC lib/conf/conf.o 00:03:09.372 CC lib/idxd/idxd_kernel.o 00:03:09.372 CC lib/env_dpdk/memory.o 00:03:09.372 LIB libspdk_rdma_utils.a 00:03:09.372 CC lib/env_dpdk/pci.o 00:03:09.372 LIB libspdk_conf.a 00:03:09.372 LIB libspdk_json.a 00:03:09.372 CC lib/env_dpdk/init.o 00:03:09.372 CC lib/env_dpdk/threads.o 00:03:09.372 SO libspdk_rdma_utils.so.1.0 00:03:09.631 SO libspdk_conf.so.6.0 00:03:09.631 SO libspdk_json.so.6.0 00:03:09.631 SYMLINK libspdk_rdma_utils.so 00:03:09.631 SYMLINK libspdk_conf.so 00:03:09.631 CC lib/env_dpdk/pci_ioat.o 00:03:09.631 SYMLINK libspdk_json.so 00:03:09.631 CC lib/env_dpdk/pci_virtio.o 00:03:09.631 CC lib/env_dpdk/pci_vmd.o 00:03:09.631 CC lib/env_dpdk/pci_idxd.o 00:03:09.631 CC lib/rdma_provider/common.o 00:03:09.631 CC lib/env_dpdk/pci_event.o 00:03:09.890 CC lib/env_dpdk/sigbus_handler.o 00:03:09.890 CC lib/env_dpdk/pci_dpdk.o 00:03:09.890 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:09.890 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:09.890 LIB libspdk_idxd.a 00:03:09.890 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:09.890 LIB libspdk_vmd.a 00:03:09.890 SO libspdk_idxd.so.12.1 00:03:09.890 SO libspdk_vmd.so.6.0 00:03:10.150 SYMLINK libspdk_idxd.so 00:03:10.150 SYMLINK libspdk_vmd.so 00:03:10.150 CC lib/jsonrpc/jsonrpc_server.o 00:03:10.150 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:10.150 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:10.150 LIB libspdk_rdma_provider.a 00:03:10.150 CC lib/jsonrpc/jsonrpc_client.o 00:03:10.150 SO libspdk_rdma_provider.so.7.0 00:03:10.409 SYMLINK libspdk_rdma_provider.so 00:03:10.669 LIB libspdk_jsonrpc.a 00:03:10.669 SO libspdk_jsonrpc.so.6.0 00:03:10.669 SYMLINK libspdk_jsonrpc.so 00:03:11.238 LIB libspdk_env_dpdk.a 00:03:11.238 CC lib/rpc/rpc.o 00:03:11.238 SO libspdk_env_dpdk.so.15.1 00:03:11.238 LIB libspdk_rpc.a 00:03:11.238 SYMLINK libspdk_env_dpdk.so 00:03:11.498 SO libspdk_rpc.so.6.0 
00:03:11.498 SYMLINK libspdk_rpc.so 00:03:11.757 CC lib/keyring/keyring_rpc.o 00:03:11.757 CC lib/keyring/keyring.o 00:03:11.757 CC lib/notify/notify.o 00:03:11.757 CC lib/notify/notify_rpc.o 00:03:11.757 CC lib/trace/trace_flags.o 00:03:11.757 CC lib/trace/trace_rpc.o 00:03:11.757 CC lib/trace/trace.o 00:03:12.016 LIB libspdk_notify.a 00:03:12.016 LIB libspdk_keyring.a 00:03:12.016 SO libspdk_notify.so.6.0 00:03:12.016 SO libspdk_keyring.so.2.0 00:03:12.016 LIB libspdk_trace.a 00:03:12.274 SYMLINK libspdk_notify.so 00:03:12.274 SYMLINK libspdk_keyring.so 00:03:12.274 SO libspdk_trace.so.11.0 00:03:12.274 SYMLINK libspdk_trace.so 00:03:12.532 CC lib/thread/thread.o 00:03:12.532 CC lib/thread/iobuf.o 00:03:12.532 CC lib/sock/sock.o 00:03:12.532 CC lib/sock/sock_rpc.o 00:03:13.099 LIB libspdk_sock.a 00:03:13.099 SO libspdk_sock.so.10.0 00:03:13.357 SYMLINK libspdk_sock.so 00:03:13.616 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:13.616 CC lib/nvme/nvme_ctrlr.o 00:03:13.616 CC lib/nvme/nvme_fabric.o 00:03:13.616 CC lib/nvme/nvme_ns_cmd.o 00:03:13.616 CC lib/nvme/nvme_pcie.o 00:03:13.616 CC lib/nvme/nvme_ns.o 00:03:13.616 CC lib/nvme/nvme_pcie_common.o 00:03:13.616 CC lib/nvme/nvme.o 00:03:13.616 CC lib/nvme/nvme_qpair.o 00:03:14.551 CC lib/nvme/nvme_quirks.o 00:03:14.551 CC lib/nvme/nvme_transport.o 00:03:14.551 LIB libspdk_thread.a 00:03:14.551 CC lib/nvme/nvme_discovery.o 00:03:14.551 SO libspdk_thread.so.11.0 00:03:14.551 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:14.551 SYMLINK libspdk_thread.so 00:03:14.551 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:14.810 CC lib/nvme/nvme_tcp.o 00:03:14.810 CC lib/nvme/nvme_opal.o 00:03:14.810 CC lib/nvme/nvme_io_msg.o 00:03:15.069 CC lib/nvme/nvme_poll_group.o 00:03:15.069 CC lib/nvme/nvme_zns.o 00:03:15.069 CC lib/nvme/nvme_stubs.o 00:03:15.327 CC lib/nvme/nvme_auth.o 00:03:15.327 CC lib/nvme/nvme_cuse.o 00:03:15.327 CC lib/accel/accel.o 00:03:15.327 CC lib/nvme/nvme_rdma.o 00:03:15.587 CC lib/accel/accel_rpc.o 00:03:15.587 CC 
lib/accel/accel_sw.o 00:03:15.845 CC lib/init/json_config.o 00:03:15.846 CC lib/blob/blobstore.o 00:03:15.846 CC lib/init/subsystem.o 00:03:16.103 CC lib/init/subsystem_rpc.o 00:03:16.103 CC lib/init/rpc.o 00:03:16.361 CC lib/blob/request.o 00:03:16.361 CC lib/blob/zeroes.o 00:03:16.361 LIB libspdk_init.a 00:03:16.362 CC lib/virtio/virtio.o 00:03:16.362 CC lib/virtio/virtio_vhost_user.o 00:03:16.362 SO libspdk_init.so.6.0 00:03:16.620 CC lib/virtio/virtio_vfio_user.o 00:03:16.620 SYMLINK libspdk_init.so 00:03:16.620 CC lib/virtio/virtio_pci.o 00:03:16.620 CC lib/fsdev/fsdev.o 00:03:16.620 CC lib/event/app.o 00:03:16.620 CC lib/blob/blob_bs_dev.o 00:03:16.620 CC lib/fsdev/fsdev_io.o 00:03:16.878 CC lib/fsdev/fsdev_rpc.o 00:03:16.878 CC lib/event/reactor.o 00:03:16.878 LIB libspdk_accel.a 00:03:16.878 SO libspdk_accel.so.16.0 00:03:16.878 CC lib/event/log_rpc.o 00:03:16.878 SYMLINK libspdk_accel.so 00:03:16.878 CC lib/event/app_rpc.o 00:03:16.878 LIB libspdk_virtio.a 00:03:17.137 CC lib/event/scheduler_static.o 00:03:17.137 SO libspdk_virtio.so.7.0 00:03:17.137 SYMLINK libspdk_virtio.so 00:03:17.137 LIB libspdk_nvme.a 00:03:17.397 LIB libspdk_event.a 00:03:17.397 CC lib/bdev/bdev_rpc.o 00:03:17.397 CC lib/bdev/bdev.o 00:03:17.397 CC lib/bdev/part.o 00:03:17.397 CC lib/bdev/bdev_zone.o 00:03:17.397 CC lib/bdev/scsi_nvme.o 00:03:17.397 SO libspdk_nvme.so.15.0 00:03:17.397 SO libspdk_event.so.14.0 00:03:17.397 LIB libspdk_fsdev.a 00:03:17.656 SO libspdk_fsdev.so.2.0 00:03:17.656 SYMLINK libspdk_event.so 00:03:17.656 SYMLINK libspdk_fsdev.so 00:03:17.916 SYMLINK libspdk_nvme.so 00:03:17.916 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:18.856 LIB libspdk_fuse_dispatcher.a 00:03:18.856 SO libspdk_fuse_dispatcher.so.1.0 00:03:18.856 SYMLINK libspdk_fuse_dispatcher.so 00:03:20.804 LIB libspdk_blob.a 00:03:20.804 SO libspdk_blob.so.11.0 00:03:20.804 SYMLINK libspdk_blob.so 00:03:20.804 LIB libspdk_bdev.a 00:03:20.804 SO libspdk_bdev.so.17.0 00:03:20.805 SYMLINK 
libspdk_bdev.so 00:03:20.805 CC lib/blobfs/blobfs.o 00:03:20.805 CC lib/blobfs/tree.o 00:03:20.805 CC lib/lvol/lvol.o 00:03:21.064 CC lib/scsi/dev.o 00:03:21.064 CC lib/scsi/lun.o 00:03:21.064 CC lib/scsi/port.o 00:03:21.064 CC lib/ublk/ublk.o 00:03:21.064 CC lib/ftl/ftl_core.o 00:03:21.064 CC lib/nvmf/ctrlr.o 00:03:21.064 CC lib/nbd/nbd.o 00:03:21.064 CC lib/scsi/scsi.o 00:03:21.323 CC lib/ftl/ftl_init.o 00:03:21.323 CC lib/ftl/ftl_layout.o 00:03:21.323 CC lib/ftl/ftl_debug.o 00:03:21.323 CC lib/scsi/scsi_bdev.o 00:03:21.323 CC lib/ftl/ftl_io.o 00:03:21.581 CC lib/ublk/ublk_rpc.o 00:03:21.581 CC lib/nbd/nbd_rpc.o 00:03:21.581 CC lib/ftl/ftl_sb.o 00:03:21.581 CC lib/ftl/ftl_l2p.o 00:03:21.581 CC lib/ftl/ftl_l2p_flat.o 00:03:21.840 CC lib/nvmf/ctrlr_discovery.o 00:03:21.840 LIB libspdk_ublk.a 00:03:21.840 LIB libspdk_nbd.a 00:03:21.840 SO libspdk_ublk.so.3.0 00:03:21.840 SO libspdk_nbd.so.7.0 00:03:21.840 CC lib/nvmf/ctrlr_bdev.o 00:03:21.840 CC lib/nvmf/subsystem.o 00:03:21.840 LIB libspdk_blobfs.a 00:03:21.840 SYMLINK libspdk_ublk.so 00:03:21.840 CC lib/scsi/scsi_pr.o 00:03:21.840 SYMLINK libspdk_nbd.so 00:03:21.840 CC lib/ftl/ftl_nv_cache.o 00:03:21.840 CC lib/ftl/ftl_band.o 00:03:21.840 SO libspdk_blobfs.so.10.0 00:03:21.840 CC lib/ftl/ftl_band_ops.o 00:03:22.098 SYMLINK libspdk_blobfs.so 00:03:22.098 CC lib/ftl/ftl_writer.o 00:03:22.098 LIB libspdk_lvol.a 00:03:22.098 SO libspdk_lvol.so.10.0 00:03:22.098 SYMLINK libspdk_lvol.so 00:03:22.098 CC lib/nvmf/nvmf.o 00:03:22.356 CC lib/scsi/scsi_rpc.o 00:03:22.356 CC lib/nvmf/nvmf_rpc.o 00:03:22.356 CC lib/ftl/ftl_rq.o 00:03:22.356 CC lib/nvmf/transport.o 00:03:22.356 CC lib/ftl/ftl_reloc.o 00:03:22.356 CC lib/scsi/task.o 00:03:22.614 CC lib/nvmf/tcp.o 00:03:22.614 LIB libspdk_scsi.a 00:03:22.614 CC lib/nvmf/stubs.o 00:03:22.614 SO libspdk_scsi.so.9.0 00:03:22.873 SYMLINK libspdk_scsi.so 00:03:22.873 CC lib/ftl/ftl_l2p_cache.o 00:03:22.873 CC lib/ftl/ftl_p2l.o 00:03:23.131 CC lib/nvmf/mdns_server.o 00:03:23.131 CC 
lib/nvmf/rdma.o 00:03:23.131 CC lib/nvmf/auth.o 00:03:23.131 CC lib/ftl/ftl_p2l_log.o 00:03:23.390 CC lib/ftl/mngt/ftl_mngt.o 00:03:23.390 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:23.390 CC lib/iscsi/conn.o 00:03:23.390 CC lib/iscsi/init_grp.o 00:03:23.390 CC lib/iscsi/iscsi.o 00:03:23.649 CC lib/iscsi/param.o 00:03:23.649 CC lib/iscsi/portal_grp.o 00:03:23.649 CC lib/iscsi/tgt_node.o 00:03:23.649 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:23.649 CC lib/iscsi/iscsi_subsystem.o 00:03:23.908 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:23.909 CC lib/iscsi/iscsi_rpc.o 00:03:23.909 CC lib/iscsi/task.o 00:03:24.168 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:24.168 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:24.168 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:24.168 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:24.168 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:24.168 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:24.426 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:24.426 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:24.426 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:24.426 CC lib/ftl/utils/ftl_conf.o 00:03:24.427 CC lib/ftl/utils/ftl_md.o 00:03:24.427 CC lib/ftl/utils/ftl_mempool.o 00:03:24.427 CC lib/vhost/vhost.o 00:03:24.427 CC lib/ftl/utils/ftl_bitmap.o 00:03:24.427 CC lib/ftl/utils/ftl_property.o 00:03:24.427 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:24.685 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:24.685 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:24.685 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:24.685 CC lib/vhost/vhost_rpc.o 00:03:24.685 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:24.685 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:24.944 CC lib/vhost/vhost_scsi.o 00:03:24.944 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:24.944 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:24.944 CC lib/vhost/vhost_blk.o 00:03:24.944 CC lib/vhost/rte_vhost_user.o 00:03:24.944 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:24.944 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:24.944 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:25.202 CC 
lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:25.202 LIB libspdk_iscsi.a 00:03:25.202 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:25.202 SO libspdk_iscsi.so.8.0 00:03:25.461 CC lib/ftl/base/ftl_base_dev.o 00:03:25.461 CC lib/ftl/base/ftl_base_bdev.o 00:03:25.461 CC lib/ftl/ftl_trace.o 00:03:25.461 SYMLINK libspdk_iscsi.so 00:03:25.720 LIB libspdk_ftl.a 00:03:25.720 LIB libspdk_nvmf.a 00:03:25.720 SO libspdk_nvmf.so.20.0 00:03:25.979 SO libspdk_ftl.so.9.0 00:03:25.979 LIB libspdk_vhost.a 00:03:25.979 SYMLINK libspdk_nvmf.so 00:03:26.238 SO libspdk_vhost.so.8.0 00:03:26.238 SYMLINK libspdk_ftl.so 00:03:26.238 SYMLINK libspdk_vhost.so 00:03:26.497 CC module/env_dpdk/env_dpdk_rpc.o 00:03:26.756 CC module/blob/bdev/blob_bdev.o 00:03:26.757 CC module/keyring/file/keyring.o 00:03:26.757 CC module/keyring/linux/keyring.o 00:03:26.757 CC module/accel/error/accel_error.o 00:03:26.757 CC module/scheduler/gscheduler/gscheduler.o 00:03:26.757 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:26.757 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:26.757 CC module/fsdev/aio/fsdev_aio.o 00:03:26.757 CC module/sock/posix/posix.o 00:03:26.757 LIB libspdk_env_dpdk_rpc.a 00:03:26.757 SO libspdk_env_dpdk_rpc.so.6.0 00:03:26.757 SYMLINK libspdk_env_dpdk_rpc.so 00:03:26.757 CC module/keyring/linux/keyring_rpc.o 00:03:26.757 CC module/keyring/file/keyring_rpc.o 00:03:26.757 LIB libspdk_scheduler_dpdk_governor.a 00:03:26.757 LIB libspdk_scheduler_gscheduler.a 00:03:26.757 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:26.757 SO libspdk_scheduler_gscheduler.so.4.0 00:03:26.757 CC module/accel/error/accel_error_rpc.o 00:03:27.016 LIB libspdk_scheduler_dynamic.a 00:03:27.016 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:27.016 SO libspdk_scheduler_dynamic.so.4.0 00:03:27.016 LIB libspdk_keyring_linux.a 00:03:27.016 SYMLINK libspdk_scheduler_gscheduler.so 00:03:27.016 CC module/accel/ioat/accel_ioat.o 00:03:27.016 LIB libspdk_blob_bdev.a 00:03:27.016 LIB libspdk_keyring_file.a 
00:03:27.016 SO libspdk_keyring_linux.so.1.0 00:03:27.016 SO libspdk_blob_bdev.so.11.0 00:03:27.016 SYMLINK libspdk_scheduler_dynamic.so 00:03:27.016 SO libspdk_keyring_file.so.2.0 00:03:27.016 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:27.016 LIB libspdk_accel_error.a 00:03:27.016 SYMLINK libspdk_keyring_linux.so 00:03:27.016 SO libspdk_accel_error.so.2.0 00:03:27.016 SYMLINK libspdk_blob_bdev.so 00:03:27.016 CC module/accel/ioat/accel_ioat_rpc.o 00:03:27.016 SYMLINK libspdk_keyring_file.so 00:03:27.016 CC module/accel/dsa/accel_dsa.o 00:03:27.016 CC module/accel/iaa/accel_iaa.o 00:03:27.016 SYMLINK libspdk_accel_error.so 00:03:27.016 CC module/accel/iaa/accel_iaa_rpc.o 00:03:27.016 CC module/fsdev/aio/linux_aio_mgr.o 00:03:27.275 LIB libspdk_accel_ioat.a 00:03:27.275 SO libspdk_accel_ioat.so.6.0 00:03:27.275 CC module/bdev/delay/vbdev_delay.o 00:03:27.275 SYMLINK libspdk_accel_ioat.so 00:03:27.275 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:27.275 CC module/bdev/error/vbdev_error.o 00:03:27.275 CC module/blobfs/bdev/blobfs_bdev.o 00:03:27.275 LIB libspdk_accel_iaa.a 00:03:27.275 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:27.275 SO libspdk_accel_iaa.so.3.0 00:03:27.275 CC module/accel/dsa/accel_dsa_rpc.o 00:03:27.275 CC module/bdev/gpt/gpt.o 00:03:27.533 LIB libspdk_fsdev_aio.a 00:03:27.533 SYMLINK libspdk_accel_iaa.so 00:03:27.533 CC module/bdev/gpt/vbdev_gpt.o 00:03:27.533 CC module/bdev/error/vbdev_error_rpc.o 00:03:27.533 SO libspdk_fsdev_aio.so.1.0 00:03:27.533 LIB libspdk_sock_posix.a 00:03:27.533 LIB libspdk_blobfs_bdev.a 00:03:27.533 SO libspdk_sock_posix.so.6.0 00:03:27.533 LIB libspdk_accel_dsa.a 00:03:27.533 SYMLINK libspdk_fsdev_aio.so 00:03:27.533 SO libspdk_blobfs_bdev.so.6.0 00:03:27.533 SO libspdk_accel_dsa.so.5.0 00:03:27.533 CC module/bdev/lvol/vbdev_lvol.o 00:03:27.533 SYMLINK libspdk_sock_posix.so 00:03:27.533 LIB libspdk_bdev_error.a 00:03:27.533 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:27.533 SO libspdk_bdev_error.so.6.0 00:03:27.533 
SYMLINK libspdk_accel_dsa.so 00:03:27.533 SYMLINK libspdk_blobfs_bdev.so 00:03:27.533 LIB libspdk_bdev_delay.a 00:03:27.793 SO libspdk_bdev_delay.so.6.0 00:03:27.793 SYMLINK libspdk_bdev_error.so 00:03:27.793 CC module/bdev/malloc/bdev_malloc.o 00:03:27.793 LIB libspdk_bdev_gpt.a 00:03:27.793 CC module/bdev/null/bdev_null.o 00:03:27.793 SO libspdk_bdev_gpt.so.6.0 00:03:27.793 CC module/bdev/nvme/bdev_nvme.o 00:03:27.793 SYMLINK libspdk_bdev_delay.so 00:03:27.793 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:27.793 CC module/bdev/raid/bdev_raid.o 00:03:27.793 CC module/bdev/passthru/vbdev_passthru.o 00:03:27.793 CC module/bdev/split/vbdev_split.o 00:03:27.793 SYMLINK libspdk_bdev_gpt.so 00:03:27.793 CC module/bdev/split/vbdev_split_rpc.o 00:03:28.052 CC module/bdev/null/bdev_null_rpc.o 00:03:28.052 LIB libspdk_bdev_split.a 00:03:28.052 SO libspdk_bdev_split.so.6.0 00:03:28.052 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:28.052 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:28.052 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:28.052 LIB libspdk_bdev_lvol.a 00:03:28.052 CC module/bdev/aio/bdev_aio.o 00:03:28.052 SYMLINK libspdk_bdev_split.so 00:03:28.052 CC module/bdev/aio/bdev_aio_rpc.o 00:03:28.311 SO libspdk_bdev_lvol.so.6.0 00:03:28.311 LIB libspdk_bdev_null.a 00:03:28.311 SYMLINK libspdk_bdev_lvol.so 00:03:28.311 SO libspdk_bdev_null.so.6.0 00:03:28.311 LIB libspdk_bdev_malloc.a 00:03:28.311 LIB libspdk_bdev_passthru.a 00:03:28.312 SO libspdk_bdev_malloc.so.6.0 00:03:28.312 SO libspdk_bdev_passthru.so.6.0 00:03:28.312 SYMLINK libspdk_bdev_null.so 00:03:28.312 CC module/bdev/raid/bdev_raid_rpc.o 00:03:28.312 SYMLINK libspdk_bdev_malloc.so 00:03:28.312 SYMLINK libspdk_bdev_passthru.so 00:03:28.312 CC module/bdev/nvme/nvme_rpc.o 00:03:28.571 CC module/bdev/ftl/bdev_ftl.o 00:03:28.571 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:28.571 CC module/bdev/iscsi/bdev_iscsi.o 00:03:28.571 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:28.571 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:03:28.571 LIB libspdk_bdev_aio.a 00:03:28.571 SO libspdk_bdev_aio.so.6.0 00:03:28.571 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:28.571 SYMLINK libspdk_bdev_aio.so 00:03:28.571 LIB libspdk_bdev_zone_block.a 00:03:28.571 CC module/bdev/nvme/bdev_mdns_client.o 00:03:28.571 SO libspdk_bdev_zone_block.so.6.0 00:03:28.571 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:28.571 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:28.831 SYMLINK libspdk_bdev_zone_block.so 00:03:28.831 CC module/bdev/nvme/vbdev_opal.o 00:03:28.831 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:28.831 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:28.831 LIB libspdk_bdev_iscsi.a 00:03:28.831 LIB libspdk_bdev_ftl.a 00:03:28.831 CC module/bdev/raid/bdev_raid_sb.o 00:03:28.831 SO libspdk_bdev_iscsi.so.6.0 00:03:28.831 SO libspdk_bdev_ftl.so.6.0 00:03:28.831 SYMLINK libspdk_bdev_iscsi.so 00:03:28.831 CC module/bdev/raid/raid0.o 00:03:29.096 CC module/bdev/raid/raid1.o 00:03:29.096 CC module/bdev/raid/concat.o 00:03:29.096 CC module/bdev/raid/raid5f.o 00:03:29.096 SYMLINK libspdk_bdev_ftl.so 00:03:29.096 LIB libspdk_bdev_virtio.a 00:03:29.096 SO libspdk_bdev_virtio.so.6.0 00:03:29.362 SYMLINK libspdk_bdev_virtio.so 00:03:29.622 LIB libspdk_bdev_raid.a 00:03:29.622 SO libspdk_bdev_raid.so.6.0 00:03:29.622 SYMLINK libspdk_bdev_raid.so 00:03:30.561 LIB libspdk_bdev_nvme.a 00:03:30.820 SO libspdk_bdev_nvme.so.7.1 00:03:30.820 SYMLINK libspdk_bdev_nvme.so 00:03:31.390 CC module/event/subsystems/fsdev/fsdev.o 00:03:31.390 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:31.390 CC module/event/subsystems/keyring/keyring.o 00:03:31.390 CC module/event/subsystems/vmd/vmd.o 00:03:31.390 CC module/event/subsystems/sock/sock.o 00:03:31.390 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:31.390 CC module/event/subsystems/iobuf/iobuf.o 00:03:31.390 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:31.390 CC module/event/subsystems/scheduler/scheduler.o 00:03:31.649 LIB 
libspdk_event_vhost_blk.a 00:03:31.649 LIB libspdk_event_fsdev.a 00:03:31.649 LIB libspdk_event_vmd.a 00:03:31.649 LIB libspdk_event_sock.a 00:03:31.649 LIB libspdk_event_scheduler.a 00:03:31.649 LIB libspdk_event_keyring.a 00:03:31.649 LIB libspdk_event_iobuf.a 00:03:31.649 SO libspdk_event_fsdev.so.1.0 00:03:31.649 SO libspdk_event_vhost_blk.so.3.0 00:03:31.649 SO libspdk_event_sock.so.5.0 00:03:31.649 SO libspdk_event_vmd.so.6.0 00:03:31.649 SO libspdk_event_scheduler.so.4.0 00:03:31.649 SO libspdk_event_keyring.so.1.0 00:03:31.649 SO libspdk_event_iobuf.so.3.0 00:03:31.649 SYMLINK libspdk_event_fsdev.so 00:03:31.649 SYMLINK libspdk_event_sock.so 00:03:31.649 SYMLINK libspdk_event_vhost_blk.so 00:03:31.649 SYMLINK libspdk_event_keyring.so 00:03:31.649 SYMLINK libspdk_event_scheduler.so 00:03:31.649 SYMLINK libspdk_event_vmd.so 00:03:31.649 SYMLINK libspdk_event_iobuf.so 00:03:31.909 CC module/event/subsystems/accel/accel.o 00:03:32.170 LIB libspdk_event_accel.a 00:03:32.170 SO libspdk_event_accel.so.6.0 00:03:32.430 SYMLINK libspdk_event_accel.so 00:03:32.689 CC module/event/subsystems/bdev/bdev.o 00:03:32.949 LIB libspdk_event_bdev.a 00:03:32.949 SO libspdk_event_bdev.so.6.0 00:03:32.949 SYMLINK libspdk_event_bdev.so 00:03:33.209 CC module/event/subsystems/ublk/ublk.o 00:03:33.469 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:33.469 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:33.469 CC module/event/subsystems/nbd/nbd.o 00:03:33.469 CC module/event/subsystems/scsi/scsi.o 00:03:33.469 LIB libspdk_event_ublk.a 00:03:33.469 LIB libspdk_event_nbd.a 00:03:33.469 SO libspdk_event_ublk.so.3.0 00:03:33.469 SO libspdk_event_nbd.so.6.0 00:03:33.469 LIB libspdk_event_scsi.a 00:03:33.469 SO libspdk_event_scsi.so.6.0 00:03:33.469 SYMLINK libspdk_event_ublk.so 00:03:33.729 LIB libspdk_event_nvmf.a 00:03:33.729 SYMLINK libspdk_event_nbd.so 00:03:33.729 SYMLINK libspdk_event_scsi.so 00:03:33.729 SO libspdk_event_nvmf.so.6.0 00:03:33.729 SYMLINK libspdk_event_nvmf.so 
00:03:33.988 CC module/event/subsystems/iscsi/iscsi.o 00:03:33.988 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:34.247 LIB libspdk_event_vhost_scsi.a 00:03:34.247 LIB libspdk_event_iscsi.a 00:03:34.247 SO libspdk_event_vhost_scsi.so.3.0 00:03:34.247 SO libspdk_event_iscsi.so.6.0 00:03:34.247 SYMLINK libspdk_event_vhost_scsi.so 00:03:34.247 SYMLINK libspdk_event_iscsi.so 00:03:34.505 SO libspdk.so.6.0 00:03:34.505 SYMLINK libspdk.so 00:03:34.765 CXX app/trace/trace.o 00:03:34.765 CC test/rpc_client/rpc_client_test.o 00:03:34.765 TEST_HEADER include/spdk/accel.h 00:03:34.765 TEST_HEADER include/spdk/accel_module.h 00:03:34.765 TEST_HEADER include/spdk/assert.h 00:03:34.765 CC app/trace_record/trace_record.o 00:03:34.765 TEST_HEADER include/spdk/barrier.h 00:03:34.765 TEST_HEADER include/spdk/base64.h 00:03:34.765 TEST_HEADER include/spdk/bdev.h 00:03:34.765 TEST_HEADER include/spdk/bdev_module.h 00:03:34.765 TEST_HEADER include/spdk/bdev_zone.h 00:03:34.765 TEST_HEADER include/spdk/bit_array.h 00:03:34.765 TEST_HEADER include/spdk/bit_pool.h 00:03:34.765 TEST_HEADER include/spdk/blob_bdev.h 00:03:34.765 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:34.765 TEST_HEADER include/spdk/blobfs.h 00:03:34.765 TEST_HEADER include/spdk/blob.h 00:03:34.765 TEST_HEADER include/spdk/conf.h 00:03:34.765 TEST_HEADER include/spdk/config.h 00:03:34.765 TEST_HEADER include/spdk/cpuset.h 00:03:34.765 TEST_HEADER include/spdk/crc16.h 00:03:34.765 TEST_HEADER include/spdk/crc32.h 00:03:34.765 TEST_HEADER include/spdk/crc64.h 00:03:34.765 TEST_HEADER include/spdk/dif.h 00:03:34.765 TEST_HEADER include/spdk/dma.h 00:03:34.765 TEST_HEADER include/spdk/endian.h 00:03:34.765 TEST_HEADER include/spdk/env_dpdk.h 00:03:34.765 TEST_HEADER include/spdk/env.h 00:03:34.765 TEST_HEADER include/spdk/event.h 00:03:34.765 TEST_HEADER include/spdk/fd_group.h 00:03:34.765 CC app/nvmf_tgt/nvmf_main.o 00:03:34.765 TEST_HEADER include/spdk/fd.h 00:03:34.765 TEST_HEADER include/spdk/file.h 
00:03:34.765 TEST_HEADER include/spdk/fsdev.h 00:03:34.765 TEST_HEADER include/spdk/fsdev_module.h 00:03:34.765 TEST_HEADER include/spdk/ftl.h 00:03:34.765 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:34.765 TEST_HEADER include/spdk/gpt_spec.h 00:03:34.765 TEST_HEADER include/spdk/hexlify.h 00:03:34.765 TEST_HEADER include/spdk/histogram_data.h 00:03:34.765 TEST_HEADER include/spdk/idxd.h 00:03:34.765 TEST_HEADER include/spdk/idxd_spec.h 00:03:34.765 TEST_HEADER include/spdk/init.h 00:03:35.025 TEST_HEADER include/spdk/ioat.h 00:03:35.025 TEST_HEADER include/spdk/ioat_spec.h 00:03:35.025 TEST_HEADER include/spdk/iscsi_spec.h 00:03:35.025 TEST_HEADER include/spdk/json.h 00:03:35.025 TEST_HEADER include/spdk/jsonrpc.h 00:03:35.025 TEST_HEADER include/spdk/keyring.h 00:03:35.025 TEST_HEADER include/spdk/keyring_module.h 00:03:35.025 CC examples/util/zipf/zipf.o 00:03:35.025 CC test/thread/poller_perf/poller_perf.o 00:03:35.025 TEST_HEADER include/spdk/likely.h 00:03:35.025 TEST_HEADER include/spdk/log.h 00:03:35.025 TEST_HEADER include/spdk/lvol.h 00:03:35.025 TEST_HEADER include/spdk/md5.h 00:03:35.025 TEST_HEADER include/spdk/memory.h 00:03:35.025 TEST_HEADER include/spdk/mmio.h 00:03:35.025 TEST_HEADER include/spdk/nbd.h 00:03:35.025 TEST_HEADER include/spdk/net.h 00:03:35.025 TEST_HEADER include/spdk/notify.h 00:03:35.025 TEST_HEADER include/spdk/nvme.h 00:03:35.025 TEST_HEADER include/spdk/nvme_intel.h 00:03:35.025 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:35.025 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:35.025 TEST_HEADER include/spdk/nvme_spec.h 00:03:35.025 TEST_HEADER include/spdk/nvme_zns.h 00:03:35.025 CC test/dma/test_dma/test_dma.o 00:03:35.025 CC test/app/bdev_svc/bdev_svc.o 00:03:35.025 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:35.025 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:35.025 TEST_HEADER include/spdk/nvmf.h 00:03:35.025 TEST_HEADER include/spdk/nvmf_spec.h 00:03:35.025 TEST_HEADER include/spdk/nvmf_transport.h 00:03:35.025 
TEST_HEADER include/spdk/opal.h 00:03:35.025 TEST_HEADER include/spdk/opal_spec.h 00:03:35.025 TEST_HEADER include/spdk/pci_ids.h 00:03:35.025 TEST_HEADER include/spdk/pipe.h 00:03:35.025 TEST_HEADER include/spdk/queue.h 00:03:35.025 TEST_HEADER include/spdk/reduce.h 00:03:35.025 CC test/env/mem_callbacks/mem_callbacks.o 00:03:35.025 TEST_HEADER include/spdk/rpc.h 00:03:35.025 TEST_HEADER include/spdk/scheduler.h 00:03:35.025 TEST_HEADER include/spdk/scsi.h 00:03:35.025 TEST_HEADER include/spdk/scsi_spec.h 00:03:35.025 TEST_HEADER include/spdk/sock.h 00:03:35.025 TEST_HEADER include/spdk/stdinc.h 00:03:35.025 TEST_HEADER include/spdk/string.h 00:03:35.025 TEST_HEADER include/spdk/thread.h 00:03:35.025 TEST_HEADER include/spdk/trace.h 00:03:35.025 TEST_HEADER include/spdk/trace_parser.h 00:03:35.025 TEST_HEADER include/spdk/tree.h 00:03:35.025 TEST_HEADER include/spdk/ublk.h 00:03:35.025 TEST_HEADER include/spdk/util.h 00:03:35.025 TEST_HEADER include/spdk/uuid.h 00:03:35.025 TEST_HEADER include/spdk/version.h 00:03:35.025 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:35.025 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:35.025 TEST_HEADER include/spdk/vhost.h 00:03:35.025 TEST_HEADER include/spdk/vmd.h 00:03:35.025 TEST_HEADER include/spdk/xor.h 00:03:35.025 TEST_HEADER include/spdk/zipf.h 00:03:35.025 CXX test/cpp_headers/accel.o 00:03:35.025 LINK rpc_client_test 00:03:35.025 LINK nvmf_tgt 00:03:35.025 LINK zipf 00:03:35.025 LINK poller_perf 00:03:35.025 LINK spdk_trace_record 00:03:35.025 LINK bdev_svc 00:03:35.303 CXX test/cpp_headers/accel_module.o 00:03:35.303 CXX test/cpp_headers/assert.o 00:03:35.303 LINK spdk_trace 00:03:35.303 CXX test/cpp_headers/barrier.o 00:03:35.303 CXX test/cpp_headers/base64.o 00:03:35.303 CXX test/cpp_headers/bdev.o 00:03:35.303 CC examples/ioat/perf/perf.o 00:03:35.303 CC test/env/vtophys/vtophys.o 00:03:35.562 LINK test_dma 00:03:35.562 CXX test/cpp_headers/bdev_module.o 00:03:35.562 CC examples/ioat/verify/verify.o 00:03:35.562 
CC examples/vmd/lsvmd/lsvmd.o 00:03:35.562 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:35.562 LINK mem_callbacks 00:03:35.562 CC app/iscsi_tgt/iscsi_tgt.o 00:03:35.562 LINK vtophys 00:03:35.562 CC test/event/event_perf/event_perf.o 00:03:35.562 LINK lsvmd 00:03:35.562 LINK ioat_perf 00:03:35.820 CXX test/cpp_headers/bdev_zone.o 00:03:35.820 CXX test/cpp_headers/bit_array.o 00:03:35.820 LINK verify 00:03:35.820 LINK event_perf 00:03:35.820 LINK iscsi_tgt 00:03:35.820 CXX test/cpp_headers/bit_pool.o 00:03:35.820 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:35.820 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:35.820 CC examples/vmd/led/led.o 00:03:36.079 CXX test/cpp_headers/blob_bdev.o 00:03:36.079 CC test/event/reactor/reactor.o 00:03:36.079 CC test/event/reactor_perf/reactor_perf.o 00:03:36.079 LINK nvme_fuzz 00:03:36.079 LINK env_dpdk_post_init 00:03:36.079 CC test/event/app_repeat/app_repeat.o 00:03:36.079 LINK led 00:03:36.079 CC test/event/scheduler/scheduler.o 00:03:36.079 LINK reactor 00:03:36.079 LINK reactor_perf 00:03:36.079 CXX test/cpp_headers/blobfs_bdev.o 00:03:36.079 LINK app_repeat 00:03:36.079 CC app/spdk_tgt/spdk_tgt.o 00:03:36.079 CXX test/cpp_headers/blobfs.o 00:03:36.337 CC test/env/memory/memory_ut.o 00:03:36.337 CXX test/cpp_headers/blob.o 00:03:36.337 LINK scheduler 00:03:36.337 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:36.337 LINK spdk_tgt 00:03:36.337 CC examples/idxd/perf/perf.o 00:03:36.337 CXX test/cpp_headers/conf.o 00:03:36.595 CXX test/cpp_headers/config.o 00:03:36.595 CC examples/thread/thread/thread_ex.o 00:03:36.595 CC test/accel/dif/dif.o 00:03:36.595 CC test/blobfs/mkfs/mkfs.o 00:03:36.596 CXX test/cpp_headers/cpuset.o 00:03:36.596 LINK interrupt_tgt 00:03:36.596 CC app/spdk_lspci/spdk_lspci.o 00:03:36.855 CC test/env/pci/pci_ut.o 00:03:36.855 CXX test/cpp_headers/crc16.o 00:03:36.855 LINK idxd_perf 00:03:36.855 LINK mkfs 00:03:36.855 LINK thread 00:03:36.855 LINK spdk_lspci 00:03:36.855 CXX 
test/cpp_headers/crc32.o 00:03:37.115 CC test/lvol/esnap/esnap.o 00:03:37.115 CC app/spdk_nvme_perf/perf.o 00:03:37.115 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:37.115 CXX test/cpp_headers/crc64.o 00:03:37.115 CC test/nvme/aer/aer.o 00:03:37.115 LINK pci_ut 00:03:37.115 CC examples/sock/hello_world/hello_sock.o 00:03:37.375 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:37.375 CXX test/cpp_headers/dif.o 00:03:37.375 LINK dif 00:03:37.375 LINK aer 00:03:37.375 CXX test/cpp_headers/dma.o 00:03:37.375 LINK hello_sock 00:03:37.375 LINK memory_ut 00:03:37.636 CXX test/cpp_headers/endian.o 00:03:37.636 CC examples/accel/perf/accel_perf.o 00:03:37.636 LINK vhost_fuzz 00:03:37.636 CC test/nvme/reset/reset.o 00:03:37.636 CC test/nvme/sgl/sgl.o 00:03:37.636 LINK iscsi_fuzz 00:03:37.636 CC examples/blob/hello_world/hello_blob.o 00:03:37.636 CXX test/cpp_headers/env_dpdk.o 00:03:37.636 CC test/nvme/e2edp/nvme_dp.o 00:03:37.896 CXX test/cpp_headers/env.o 00:03:37.896 LINK reset 00:03:37.896 LINK hello_blob 00:03:37.896 CC test/bdev/bdevio/bdevio.o 00:03:37.896 LINK sgl 00:03:37.896 CC test/app/histogram_perf/histogram_perf.o 00:03:37.896 LINK spdk_nvme_perf 00:03:38.157 CXX test/cpp_headers/event.o 00:03:38.157 LINK nvme_dp 00:03:38.157 CXX test/cpp_headers/fd_group.o 00:03:38.157 LINK accel_perf 00:03:38.157 LINK histogram_perf 00:03:38.157 CXX test/cpp_headers/fd.o 00:03:38.157 CXX test/cpp_headers/file.o 00:03:38.157 CXX test/cpp_headers/fsdev.o 00:03:38.157 CC examples/blob/cli/blobcli.o 00:03:38.157 CC test/nvme/overhead/overhead.o 00:03:38.157 CC app/spdk_nvme_identify/identify.o 00:03:38.417 CC test/nvme/err_injection/err_injection.o 00:03:38.417 CC test/app/stub/stub.o 00:03:38.417 CC test/app/jsoncat/jsoncat.o 00:03:38.417 LINK bdevio 00:03:38.417 CXX test/cpp_headers/fsdev_module.o 00:03:38.417 LINK jsoncat 00:03:38.417 LINK stub 00:03:38.417 LINK err_injection 00:03:38.417 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:38.678 CXX test/cpp_headers/ftl.o 
00:03:38.678 LINK overhead 00:03:38.678 CXX test/cpp_headers/fuse_dispatcher.o 00:03:38.678 CC test/nvme/startup/startup.o 00:03:38.678 LINK blobcli 00:03:38.678 CC examples/bdev/hello_world/hello_bdev.o 00:03:38.678 CC app/spdk_nvme_discover/discovery_aer.o 00:03:38.678 CC examples/nvme/hello_world/hello_world.o 00:03:38.678 CC examples/nvme/reconnect/reconnect.o 00:03:38.937 LINK hello_fsdev 00:03:38.937 CXX test/cpp_headers/gpt_spec.o 00:03:38.937 LINK startup 00:03:38.937 LINK spdk_nvme_discover 00:03:38.937 LINK hello_bdev 00:03:38.937 LINK hello_world 00:03:38.937 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:38.937 CXX test/cpp_headers/hexlify.o 00:03:39.197 CC examples/nvme/arbitration/arbitration.o 00:03:39.197 CC test/nvme/reserve/reserve.o 00:03:39.197 CXX test/cpp_headers/histogram_data.o 00:03:39.197 LINK reconnect 00:03:39.197 LINK spdk_nvme_identify 00:03:39.197 CC test/nvme/simple_copy/simple_copy.o 00:03:39.197 CC app/spdk_top/spdk_top.o 00:03:39.197 CC examples/bdev/bdevperf/bdevperf.o 00:03:39.197 CXX test/cpp_headers/idxd.o 00:03:39.456 LINK reserve 00:03:39.456 CC examples/nvme/hotplug/hotplug.o 00:03:39.456 LINK arbitration 00:03:39.456 LINK simple_copy 00:03:39.456 CXX test/cpp_headers/idxd_spec.o 00:03:39.456 CC app/vhost/vhost.o 00:03:39.456 CXX test/cpp_headers/init.o 00:03:39.456 LINK nvme_manage 00:03:39.716 LINK hotplug 00:03:39.716 LINK vhost 00:03:39.716 CXX test/cpp_headers/ioat.o 00:03:39.716 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:39.716 CC test/nvme/connect_stress/connect_stress.o 00:03:39.716 CC app/spdk_dd/spdk_dd.o 00:03:39.716 CXX test/cpp_headers/ioat_spec.o 00:03:39.716 CC app/fio/nvme/fio_plugin.o 00:03:39.975 LINK cmb_copy 00:03:39.975 CXX test/cpp_headers/iscsi_spec.o 00:03:39.975 LINK connect_stress 00:03:39.975 CC app/fio/bdev/fio_plugin.o 00:03:39.975 CXX test/cpp_headers/json.o 00:03:39.975 CC test/nvme/boot_partition/boot_partition.o 00:03:40.235 CC examples/nvme/abort/abort.o 00:03:40.235 LINK spdk_dd 
00:03:40.235 CC test/nvme/compliance/nvme_compliance.o 00:03:40.235 CXX test/cpp_headers/jsonrpc.o 00:03:40.235 LINK bdevperf 00:03:40.235 LINK boot_partition 00:03:40.235 LINK spdk_top 00:03:40.235 CXX test/cpp_headers/keyring.o 00:03:40.494 CC test/nvme/fused_ordering/fused_ordering.o 00:03:40.494 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:40.494 LINK spdk_bdev 00:03:40.494 LINK spdk_nvme 00:03:40.494 CC test/nvme/fdp/fdp.o 00:03:40.494 CXX test/cpp_headers/keyring_module.o 00:03:40.494 CC test/nvme/cuse/cuse.o 00:03:40.494 LINK abort 00:03:40.494 LINK nvme_compliance 00:03:40.494 CXX test/cpp_headers/likely.o 00:03:40.494 CXX test/cpp_headers/log.o 00:03:40.494 LINK fused_ordering 00:03:40.755 LINK doorbell_aers 00:03:40.755 CXX test/cpp_headers/lvol.o 00:03:40.755 CXX test/cpp_headers/md5.o 00:03:40.755 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:40.755 CXX test/cpp_headers/memory.o 00:03:40.755 CXX test/cpp_headers/mmio.o 00:03:40.755 CXX test/cpp_headers/nbd.o 00:03:40.755 CXX test/cpp_headers/net.o 00:03:40.755 CXX test/cpp_headers/notify.o 00:03:40.755 CXX test/cpp_headers/nvme.o 00:03:40.755 LINK fdp 00:03:40.755 CXX test/cpp_headers/nvme_intel.o 00:03:40.755 CXX test/cpp_headers/nvme_ocssd.o 00:03:40.755 LINK pmr_persistence 00:03:41.014 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:41.014 CXX test/cpp_headers/nvme_spec.o 00:03:41.014 CXX test/cpp_headers/nvme_zns.o 00:03:41.014 CXX test/cpp_headers/nvmf_cmd.o 00:03:41.014 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:41.014 CXX test/cpp_headers/nvmf.o 00:03:41.014 CXX test/cpp_headers/nvmf_spec.o 00:03:41.014 CXX test/cpp_headers/nvmf_transport.o 00:03:41.014 CXX test/cpp_headers/opal.o 00:03:41.014 CXX test/cpp_headers/opal_spec.o 00:03:41.273 CXX test/cpp_headers/pci_ids.o 00:03:41.273 CXX test/cpp_headers/pipe.o 00:03:41.273 CXX test/cpp_headers/queue.o 00:03:41.273 CXX test/cpp_headers/reduce.o 00:03:41.273 CC examples/nvmf/nvmf/nvmf.o 00:03:41.273 CXX test/cpp_headers/rpc.o 
00:03:41.273 CXX test/cpp_headers/scheduler.o 00:03:41.273 CXX test/cpp_headers/scsi.o 00:03:41.273 CXX test/cpp_headers/scsi_spec.o 00:03:41.273 CXX test/cpp_headers/sock.o 00:03:41.273 CXX test/cpp_headers/stdinc.o 00:03:41.273 CXX test/cpp_headers/string.o 00:03:41.273 CXX test/cpp_headers/thread.o 00:03:41.273 CXX test/cpp_headers/trace.o 00:03:41.533 CXX test/cpp_headers/trace_parser.o 00:03:41.533 CXX test/cpp_headers/tree.o 00:03:41.533 CXX test/cpp_headers/ublk.o 00:03:41.533 CXX test/cpp_headers/util.o 00:03:41.533 CXX test/cpp_headers/uuid.o 00:03:41.533 LINK nvmf 00:03:41.533 CXX test/cpp_headers/version.o 00:03:41.533 CXX test/cpp_headers/vfio_user_pci.o 00:03:41.533 CXX test/cpp_headers/vfio_user_spec.o 00:03:41.533 CXX test/cpp_headers/vhost.o 00:03:41.533 CXX test/cpp_headers/vmd.o 00:03:41.533 CXX test/cpp_headers/xor.o 00:03:41.533 CXX test/cpp_headers/zipf.o 00:03:41.795 LINK cuse 00:03:42.736 LINK esnap 00:03:42.996 00:03:42.996 real 1m29.814s 00:03:42.996 user 7m53.551s 00:03:42.996 sys 1m41.935s 00:03:42.996 11:55:46 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:42.996 11:55:46 make -- common/autotest_common.sh@10 -- $ set +x 00:03:42.996 ************************************ 00:03:42.996 END TEST make 00:03:42.996 ************************************ 00:03:43.256 11:55:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:43.256 11:55:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:43.256 11:55:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:43.256 11:55:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.256 11:55:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:43.256 11:55:46 -- pm/common@44 -- $ pid=5477 00:03:43.256 11:55:46 -- pm/common@50 -- $ kill -TERM 5477 00:03:43.256 11:55:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.256 11:55:46 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:43.256 11:55:46 -- pm/common@44 -- $ pid=5479 00:03:43.256 11:55:46 -- pm/common@50 -- $ kill -TERM 5479 00:03:43.256 11:55:46 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:43.256 11:55:46 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:43.256 11:55:46 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:43.257 11:55:46 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:43.257 11:55:46 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:43.257 11:55:46 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:43.257 11:55:46 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:43.257 11:55:46 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:43.257 11:55:46 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:43.257 11:55:46 -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.257 11:55:46 -- scripts/common.sh@336 -- # read -ra ver1 00:03:43.257 11:55:46 -- scripts/common.sh@337 -- # IFS=.-: 00:03:43.257 11:55:46 -- scripts/common.sh@337 -- # read -ra ver2 00:03:43.257 11:55:46 -- scripts/common.sh@338 -- # local 'op=<' 00:03:43.257 11:55:46 -- scripts/common.sh@340 -- # ver1_l=2 00:03:43.257 11:55:46 -- scripts/common.sh@341 -- # ver2_l=1 00:03:43.257 11:55:46 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:43.257 11:55:46 -- scripts/common.sh@344 -- # case "$op" in 00:03:43.257 11:55:46 -- scripts/common.sh@345 -- # : 1 00:03:43.257 11:55:46 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:43.257 11:55:46 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:43.257 11:55:46 -- scripts/common.sh@365 -- # decimal 1 00:03:43.257 11:55:46 -- scripts/common.sh@353 -- # local d=1 00:03:43.257 11:55:46 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.257 11:55:46 -- scripts/common.sh@355 -- # echo 1 00:03:43.257 11:55:46 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:43.257 11:55:46 -- scripts/common.sh@366 -- # decimal 2 00:03:43.257 11:55:46 -- scripts/common.sh@353 -- # local d=2 00:03:43.257 11:55:46 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.257 11:55:46 -- scripts/common.sh@355 -- # echo 2 00:03:43.257 11:55:46 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:43.257 11:55:46 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:43.257 11:55:46 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:43.257 11:55:46 -- scripts/common.sh@368 -- # return 0 00:03:43.257 11:55:46 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.257 11:55:46 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:43.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.257 --rc genhtml_branch_coverage=1 00:03:43.257 --rc genhtml_function_coverage=1 00:03:43.257 --rc genhtml_legend=1 00:03:43.257 --rc geninfo_all_blocks=1 00:03:43.257 --rc geninfo_unexecuted_blocks=1 00:03:43.257 00:03:43.257 ' 00:03:43.257 11:55:46 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:43.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.257 --rc genhtml_branch_coverage=1 00:03:43.257 --rc genhtml_function_coverage=1 00:03:43.257 --rc genhtml_legend=1 00:03:43.257 --rc geninfo_all_blocks=1 00:03:43.257 --rc geninfo_unexecuted_blocks=1 00:03:43.257 00:03:43.257 ' 00:03:43.257 11:55:46 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:43.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.257 --rc genhtml_branch_coverage=1 00:03:43.257 --rc 
genhtml_function_coverage=1 00:03:43.257 --rc genhtml_legend=1 00:03:43.257 --rc geninfo_all_blocks=1 00:03:43.257 --rc geninfo_unexecuted_blocks=1 00:03:43.257 00:03:43.257 ' 00:03:43.257 11:55:46 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:43.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.257 --rc genhtml_branch_coverage=1 00:03:43.257 --rc genhtml_function_coverage=1 00:03:43.257 --rc genhtml_legend=1 00:03:43.257 --rc geninfo_all_blocks=1 00:03:43.257 --rc geninfo_unexecuted_blocks=1 00:03:43.257 00:03:43.257 ' 00:03:43.257 11:55:46 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:43.257 11:55:46 -- nvmf/common.sh@7 -- # uname -s 00:03:43.257 11:55:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.257 11:55:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.257 11:55:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.257 11:55:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.257 11:55:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.257 11:55:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.257 11:55:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.257 11:55:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.257 11:55:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:43.518 11:55:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:43.518 11:55:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fc16503-8259-4497-8462-7e6e8faaef14 00:03:43.518 11:55:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=2fc16503-8259-4497-8462-7e6e8faaef14 00:03:43.518 11:55:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:43.518 11:55:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:43.518 11:55:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:43.518 11:55:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:43.518 11:55:46 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:43.518 11:55:46 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:43.518 11:55:46 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:43.518 11:55:46 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:43.518 11:55:46 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:43.518 11:55:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.518 11:55:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.518 11:55:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.518 11:55:46 -- paths/export.sh@5 -- # export PATH 00:03:43.518 11:55:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.518 11:55:46 -- nvmf/common.sh@51 -- # : 0 00:03:43.518 11:55:46 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:43.518 11:55:46 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:43.518 11:55:46 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:43.518 11:55:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:43.518 11:55:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:43.518 11:55:46 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:43.518 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:43.518 11:55:46 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:43.518 11:55:46 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:43.518 11:55:46 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:43.518 11:55:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:43.518 11:55:46 -- spdk/autotest.sh@32 -- # uname -s 00:03:43.518 11:55:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:43.518 11:55:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:43.518 11:55:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:43.518 11:55:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:43.518 11:55:46 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:43.518 11:55:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:43.518 11:55:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:43.518 11:55:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:43.518 11:55:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:43.518 11:55:46 -- spdk/autotest.sh@48 -- # udevadm_pid=54524 00:03:43.518 11:55:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:43.518 11:55:46 -- pm/common@17 -- # local monitor 00:03:43.518 11:55:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.518 11:55:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.518 11:55:46 -- pm/common@21 -- # date +%s 00:03:43.518 11:55:46 -- pm/common@25 -- # sleep 1 00:03:43.518 11:55:46 -- 
pm/common@21 -- # date +%s 00:03:43.518 11:55:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732017346 00:03:43.518 11:55:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732017346 00:03:43.518 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732017346_collect-vmstat.pm.log 00:03:43.518 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732017346_collect-cpu-load.pm.log 00:03:44.458 11:55:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:44.458 11:55:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:44.458 11:55:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.458 11:55:47 -- common/autotest_common.sh@10 -- # set +x 00:03:44.458 11:55:47 -- spdk/autotest.sh@59 -- # create_test_list 00:03:44.458 11:55:47 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:44.458 11:55:47 -- common/autotest_common.sh@10 -- # set +x 00:03:44.458 11:55:47 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:44.458 11:55:47 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:44.458 11:55:47 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:44.458 11:55:47 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:44.458 11:55:47 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:44.458 11:55:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:44.458 11:55:47 -- common/autotest_common.sh@1457 -- # uname 00:03:44.458 11:55:47 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:44.458 11:55:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:44.458 11:55:47 -- common/autotest_common.sh@1477 -- 
# uname 00:03:44.718 11:55:47 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:44.718 11:55:47 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:44.718 11:55:47 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:44.718 lcov: LCOV version 1.15 00:03:44.718 11:55:47 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:59.612 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:59.612 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:14.517 11:56:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:14.517 11:56:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.517 11:56:16 -- common/autotest_common.sh@10 -- # set +x 00:04:14.517 11:56:16 -- spdk/autotest.sh@78 -- # rm -f 00:04:14.517 11:56:16 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.517 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:14.517 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:14.517 11:56:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:14.517 11:56:17 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:14.517 11:56:17 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:14.517 11:56:17 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:14.517 
11:56:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:14.517 11:56:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:14.517 11:56:17 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:14.517 11:56:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.517 11:56:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:14.517 11:56:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:14.517 11:56:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:14.517 11:56:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:14.517 11:56:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:14.517 11:56:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:14.517 11:56:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:14.517 11:56:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:14.517 11:56:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:14.517 11:56:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:14.517 11:56:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:14.517 11:56:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:14.517 11:56:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:14.517 11:56:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:14.517 11:56:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:14.517 11:56:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:14.517 11:56:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:14.517 11:56:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.517 11:56:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.517 11:56:17 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:14.517 11:56:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:14.517 11:56:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:14.517 No valid GPT data, bailing 00:04:14.517 11:56:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:14.517 11:56:17 -- scripts/common.sh@394 -- # pt= 00:04:14.517 11:56:17 -- scripts/common.sh@395 -- # return 1 00:04:14.517 11:56:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:14.517 1+0 records in 00:04:14.517 1+0 records out 00:04:14.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0060832 s, 172 MB/s 00:04:14.517 11:56:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.517 11:56:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.517 11:56:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:14.517 11:56:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:14.517 11:56:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:14.778 No valid GPT data, bailing 00:04:14.778 11:56:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:14.778 11:56:17 -- scripts/common.sh@394 -- # pt= 00:04:14.778 11:56:17 -- scripts/common.sh@395 -- # return 1 00:04:14.778 11:56:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:14.778 1+0 records in 00:04:14.778 1+0 records out 00:04:14.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00647754 s, 162 MB/s 00:04:14.778 11:56:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.778 11:56:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.778 11:56:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:14.778 11:56:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:14.778 11:56:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:14.778 No valid GPT data, bailing 00:04:14.778 11:56:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:14.778 11:56:18 -- scripts/common.sh@394 -- # pt= 00:04:14.778 11:56:18 -- scripts/common.sh@395 -- # return 1 00:04:14.778 11:56:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:14.778 1+0 records in 00:04:14.778 1+0 records out 00:04:14.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00662229 s, 158 MB/s 00:04:14.778 11:56:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.778 11:56:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.778 11:56:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:14.778 11:56:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:14.778 11:56:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:14.778 No valid GPT data, bailing 00:04:14.778 11:56:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:14.778 11:56:18 -- scripts/common.sh@394 -- # pt= 00:04:14.778 11:56:18 -- scripts/common.sh@395 -- # return 1 00:04:14.778 11:56:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:14.778 1+0 records in 00:04:14.778 1+0 records out 00:04:14.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00591877 s, 177 MB/s 00:04:14.778 11:56:18 -- spdk/autotest.sh@105 -- # sync 00:04:15.037 11:56:18 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:15.037 11:56:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:15.038 11:56:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:18.350 11:56:21 -- spdk/autotest.sh@111 -- # uname -s 00:04:18.350 11:56:21 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:18.350 11:56:21 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:18.350 11:56:21 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:04:18.610 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.610 Hugepages 00:04:18.610 node hugesize free / total 00:04:18.610 node0 1048576kB 0 / 0 00:04:18.610 node0 2048kB 0 / 0 00:04:18.610 00:04:18.610 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.871 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:18.871 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:19.132 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:19.132 11:56:22 -- spdk/autotest.sh@117 -- # uname -s 00:04:19.132 11:56:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:19.132 11:56:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:19.132 11:56:22 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.963 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.963 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.963 11:56:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:21.346 11:56:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:21.346 11:56:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:21.346 11:56:24 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:21.346 11:56:24 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:21.346 11:56:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:21.346 11:56:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:21.346 11:56:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:21.346 11:56:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:21.346 11:56:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:21.346 11:56:24 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:21.346 11:56:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:21.346 11:56:24 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.606 Waiting for block devices as requested 00:04:21.606 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.866 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.866 11:56:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:21.866 11:56:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:21.866 11:56:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:21.866 11:56:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:21.866 11:56:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:21.866 11:56:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:21.866 11:56:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:21.866 11:56:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:21.866 11:56:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:21.866 11:56:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:21.866 11:56:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:21.866 11:56:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:21.866 11:56:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:21.866 11:56:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:21.866 11:56:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:21.866 11:56:25 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:21.866 11:56:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:21.866 11:56:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:21.866 11:56:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:21.866 11:56:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:21.866 11:56:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:21.866 11:56:25 -- common/autotest_common.sh@1543 -- # continue 00:04:21.866 11:56:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:21.866 11:56:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:21.866 11:56:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:21.866 11:56:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:21.866 11:56:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:21.866 11:56:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:21.866 11:56:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:21.866 11:56:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:21.866 11:56:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:21.866 11:56:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:21.866 11:56:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:21.866 11:56:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:21.866 11:56:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:21.866 11:56:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:21.866 11:56:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:21.866 11:56:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:21.866 11:56:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:04:21.866 11:56:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:21.866 11:56:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:21.866 11:56:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:21.866 11:56:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:21.866 11:56:25 -- common/autotest_common.sh@1543 -- # continue 00:04:21.866 11:56:25 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:21.866 11:56:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.866 11:56:25 -- common/autotest_common.sh@10 -- # set +x 00:04:22.126 11:56:25 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:22.127 11:56:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.127 11:56:25 -- common/autotest_common.sh@10 -- # set +x 00:04:22.127 11:56:25 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.696 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.956 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.956 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.956 11:56:26 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:22.956 11:56:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.956 11:56:26 -- common/autotest_common.sh@10 -- # set +x 00:04:22.956 11:56:26 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:22.956 11:56:26 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:22.956 11:56:26 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:22.956 11:56:26 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:22.956 11:56:26 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:22.956 11:56:26 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:22.956 11:56:26 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:22.956 11:56:26 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:22.956 
11:56:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:22.956 11:56:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:22.956 11:56:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.956 11:56:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:22.956 11:56:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:23.216 11:56:26 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:23.216 11:56:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:23.216 11:56:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:23.216 11:56:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:23.216 11:56:26 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:23.216 11:56:26 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:23.216 11:56:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:23.216 11:56:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:23.216 11:56:26 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:23.217 11:56:26 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:23.217 11:56:26 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:23.217 11:56:26 -- common/autotest_common.sh@1572 -- # return 0 00:04:23.217 11:56:26 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:23.217 11:56:26 -- common/autotest_common.sh@1580 -- # return 0 00:04:23.217 11:56:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:23.217 11:56:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:23.217 11:56:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:23.217 11:56:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:23.217 11:56:26 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:23.217 11:56:26 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.217 11:56:26 -- common/autotest_common.sh@10 -- # set +x 00:04:23.217 11:56:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:23.217 11:56:26 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:23.217 11:56:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.217 11:56:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.217 11:56:26 -- common/autotest_common.sh@10 -- # set +x 00:04:23.217 ************************************ 00:04:23.217 START TEST env 00:04:23.217 ************************************ 00:04:23.217 11:56:26 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:23.217 * Looking for test storage... 00:04:23.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:23.217 11:56:26 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.217 11:56:26 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.217 11:56:26 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.477 11:56:26 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.477 11:56:26 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.477 11:56:26 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.477 11:56:26 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.477 11:56:26 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.477 11:56:26 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.477 11:56:26 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.477 11:56:26 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.477 11:56:26 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.477 11:56:26 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.477 11:56:26 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.477 11:56:26 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.477 11:56:26 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:23.477 11:56:26 env -- scripts/common.sh@345 -- # : 1 00:04:23.477 11:56:26 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.477 11:56:26 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.477 11:56:26 env -- scripts/common.sh@365 -- # decimal 1 00:04:23.477 11:56:26 env -- scripts/common.sh@353 -- # local d=1 00:04:23.477 11:56:26 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.477 11:56:26 env -- scripts/common.sh@355 -- # echo 1 00:04:23.477 11:56:26 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.477 11:56:26 env -- scripts/common.sh@366 -- # decimal 2 00:04:23.477 11:56:26 env -- scripts/common.sh@353 -- # local d=2 00:04:23.477 11:56:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.477 11:56:26 env -- scripts/common.sh@355 -- # echo 2 00:04:23.477 11:56:26 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.477 11:56:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.477 11:56:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.477 11:56:26 env -- scripts/common.sh@368 -- # return 0 00:04:23.477 11:56:26 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.477 11:56:26 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.477 --rc genhtml_branch_coverage=1 00:04:23.477 --rc genhtml_function_coverage=1 00:04:23.477 --rc genhtml_legend=1 00:04:23.477 --rc geninfo_all_blocks=1 00:04:23.477 --rc geninfo_unexecuted_blocks=1 00:04:23.477 00:04:23.477 ' 00:04:23.477 11:56:26 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.477 --rc genhtml_branch_coverage=1 00:04:23.477 --rc genhtml_function_coverage=1 00:04:23.477 --rc genhtml_legend=1 00:04:23.477 --rc 
geninfo_all_blocks=1 00:04:23.477 --rc geninfo_unexecuted_blocks=1 00:04:23.477 00:04:23.477 ' 00:04:23.477 11:56:26 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.477 --rc genhtml_branch_coverage=1 00:04:23.477 --rc genhtml_function_coverage=1 00:04:23.477 --rc genhtml_legend=1 00:04:23.477 --rc geninfo_all_blocks=1 00:04:23.477 --rc geninfo_unexecuted_blocks=1 00:04:23.477 00:04:23.477 ' 00:04:23.477 11:56:26 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.477 --rc genhtml_branch_coverage=1 00:04:23.477 --rc genhtml_function_coverage=1 00:04:23.477 --rc genhtml_legend=1 00:04:23.477 --rc geninfo_all_blocks=1 00:04:23.477 --rc geninfo_unexecuted_blocks=1 00:04:23.477 00:04:23.477 ' 00:04:23.477 11:56:26 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:23.477 11:56:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.477 11:56:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.477 11:56:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.477 ************************************ 00:04:23.477 START TEST env_memory 00:04:23.477 ************************************ 00:04:23.477 11:56:26 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:23.477 00:04:23.477 00:04:23.477 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.477 http://cunit.sourceforge.net/ 00:04:23.477 00:04:23.477 00:04:23.477 Suite: memory 00:04:23.477 Test: alloc and free memory map ...[2024-11-19 11:56:26.752879] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:23.477 passed 00:04:23.477 Test: mem map translation ...[2024-11-19 11:56:26.794592] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:23.477 [2024-11-19 11:56:26.794670] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:23.477 [2024-11-19 11:56:26.794757] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:23.477 [2024-11-19 11:56:26.794832] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:23.477 passed 00:04:23.737 Test: mem map registration ...[2024-11-19 11:56:26.858665] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:23.737 [2024-11-19 11:56:26.858751] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:23.737 passed 00:04:23.737 Test: mem map adjacent registrations ...passed 00:04:23.737 00:04:23.737 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.737 suites 1 1 n/a 0 0 00:04:23.737 tests 4 4 4 0 0 00:04:23.737 asserts 152 152 152 0 n/a 00:04:23.737 00:04:23.737 Elapsed time = 0.228 seconds 00:04:23.737 00:04:23.737 real 0m0.283s 00:04:23.737 user 0m0.240s 00:04:23.737 sys 0m0.031s 00:04:23.737 11:56:26 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.737 ************************************ 00:04:23.737 END TEST env_memory 00:04:23.737 ************************************ 00:04:23.737 11:56:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:23.737 11:56:27 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.737 
11:56:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.737 11:56:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.737 11:56:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.737 ************************************ 00:04:23.737 START TEST env_vtophys 00:04:23.737 ************************************ 00:04:23.737 11:56:27 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.737 EAL: lib.eal log level changed from notice to debug 00:04:23.737 EAL: Detected lcore 0 as core 0 on socket 0 00:04:23.737 EAL: Detected lcore 1 as core 0 on socket 0 00:04:23.737 EAL: Detected lcore 2 as core 0 on socket 0 00:04:23.737 EAL: Detected lcore 3 as core 0 on socket 0 00:04:23.737 EAL: Detected lcore 4 as core 0 on socket 0 00:04:23.737 EAL: Detected lcore 5 as core 0 on socket 0 00:04:23.737 EAL: Detected lcore 6 as core 0 on socket 0 00:04:23.737 EAL: Detected lcore 7 as core 0 on socket 0 00:04:23.737 EAL: Detected lcore 8 as core 0 on socket 0 00:04:23.737 EAL: Detected lcore 9 as core 0 on socket 0 00:04:23.737 EAL: Maximum logical cores by configuration: 128 00:04:23.737 EAL: Detected CPU lcores: 10 00:04:23.737 EAL: Detected NUMA nodes: 1 00:04:23.737 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:23.737 EAL: Detected shared linkage of DPDK 00:04:23.737 EAL: No shared files mode enabled, IPC will be disabled 00:04:23.737 EAL: Selected IOVA mode 'PA' 00:04:23.737 EAL: Probing VFIO support... 00:04:23.737 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.737 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:23.737 EAL: Ask a virtual area of 0x2e000 bytes 00:04:23.737 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:23.737 EAL: Setting up physically contiguous memory... 
00:04:23.737 EAL: Setting maximum number of open files to 524288 00:04:23.737 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:23.737 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:23.737 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.997 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:23.997 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.997 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.997 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:23.997 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:23.997 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.997 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:23.998 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.998 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.998 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:23.998 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:23.998 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.998 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:23.998 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.998 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.998 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:23.998 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:23.998 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.998 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:23.998 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.998 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.998 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:23.998 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:23.998 EAL: Hugepages will be freed exactly as allocated. 
00:04:23.998 EAL: No shared files mode enabled, IPC is disabled 00:04:23.998 EAL: No shared files mode enabled, IPC is disabled 00:04:23.998 EAL: TSC frequency is ~2290000 KHz 00:04:23.998 EAL: Main lcore 0 is ready (tid=7f6f8b340a40;cpuset=[0]) 00:04:23.998 EAL: Trying to obtain current memory policy. 00:04:23.998 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.998 EAL: Restoring previous memory policy: 0 00:04:23.998 EAL: request: mp_malloc_sync 00:04:23.998 EAL: No shared files mode enabled, IPC is disabled 00:04:23.998 EAL: Heap on socket 0 was expanded by 2MB 00:04:23.998 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.998 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:23.998 EAL: Mem event callback 'spdk:(nil)' registered 00:04:23.998 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:23.998 00:04:23.998 00:04:23.998 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.998 http://cunit.sourceforge.net/ 00:04:23.998 00:04:23.998 00:04:23.998 Suite: components_suite 00:04:24.259 Test: vtophys_malloc_test ...passed 00:04:24.259 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:24.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.259 EAL: Restoring previous memory policy: 4 00:04:24.259 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.259 EAL: request: mp_malloc_sync 00:04:24.259 EAL: No shared files mode enabled, IPC is disabled 00:04:24.259 EAL: Heap on socket 0 was expanded by 4MB 00:04:24.259 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.260 EAL: request: mp_malloc_sync 00:04:24.260 EAL: No shared files mode enabled, IPC is disabled 00:04:24.260 EAL: Heap on socket 0 was shrunk by 4MB 00:04:24.260 EAL: Trying to obtain current memory policy. 
00:04:24.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.260 EAL: Restoring previous memory policy: 4 00:04:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.260 EAL: request: mp_malloc_sync 00:04:24.260 EAL: No shared files mode enabled, IPC is disabled 00:04:24.260 EAL: Heap on socket 0 was expanded by 6MB 00:04:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.260 EAL: request: mp_malloc_sync 00:04:24.260 EAL: No shared files mode enabled, IPC is disabled 00:04:24.260 EAL: Heap on socket 0 was shrunk by 6MB 00:04:24.260 EAL: Trying to obtain current memory policy. 00:04:24.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.260 EAL: Restoring previous memory policy: 4 00:04:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.260 EAL: request: mp_malloc_sync 00:04:24.260 EAL: No shared files mode enabled, IPC is disabled 00:04:24.260 EAL: Heap on socket 0 was expanded by 10MB 00:04:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.260 EAL: request: mp_malloc_sync 00:04:24.260 EAL: No shared files mode enabled, IPC is disabled 00:04:24.260 EAL: Heap on socket 0 was shrunk by 10MB 00:04:24.537 EAL: Trying to obtain current memory policy. 00:04:24.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.537 EAL: Restoring previous memory policy: 4 00:04:24.537 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.537 EAL: request: mp_malloc_sync 00:04:24.537 EAL: No shared files mode enabled, IPC is disabled 00:04:24.537 EAL: Heap on socket 0 was expanded by 18MB 00:04:24.537 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.537 EAL: request: mp_malloc_sync 00:04:24.537 EAL: No shared files mode enabled, IPC is disabled 00:04:24.537 EAL: Heap on socket 0 was shrunk by 18MB 00:04:24.537 EAL: Trying to obtain current memory policy. 
00:04:24.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.537 EAL: Restoring previous memory policy: 4 00:04:24.537 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.537 EAL: request: mp_malloc_sync 00:04:24.537 EAL: No shared files mode enabled, IPC is disabled 00:04:24.537 EAL: Heap on socket 0 was expanded by 34MB 00:04:24.537 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.537 EAL: request: mp_malloc_sync 00:04:24.537 EAL: No shared files mode enabled, IPC is disabled 00:04:24.537 EAL: Heap on socket 0 was shrunk by 34MB 00:04:24.537 EAL: Trying to obtain current memory policy. 00:04:24.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.537 EAL: Restoring previous memory policy: 4 00:04:24.537 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.537 EAL: request: mp_malloc_sync 00:04:24.537 EAL: No shared files mode enabled, IPC is disabled 00:04:24.537 EAL: Heap on socket 0 was expanded by 66MB 00:04:24.797 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.797 EAL: request: mp_malloc_sync 00:04:24.797 EAL: No shared files mode enabled, IPC is disabled 00:04:24.797 EAL: Heap on socket 0 was shrunk by 66MB 00:04:24.797 EAL: Trying to obtain current memory policy. 00:04:24.797 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.797 EAL: Restoring previous memory policy: 4 00:04:24.797 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.797 EAL: request: mp_malloc_sync 00:04:24.797 EAL: No shared files mode enabled, IPC is disabled 00:04:24.797 EAL: Heap on socket 0 was expanded by 130MB 00:04:25.057 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.057 EAL: request: mp_malloc_sync 00:04:25.057 EAL: No shared files mode enabled, IPC is disabled 00:04:25.057 EAL: Heap on socket 0 was shrunk by 130MB 00:04:25.317 EAL: Trying to obtain current memory policy. 
00:04:25.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.317 EAL: Restoring previous memory policy: 4 00:04:25.317 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.317 EAL: request: mp_malloc_sync 00:04:25.317 EAL: No shared files mode enabled, IPC is disabled 00:04:25.317 EAL: Heap on socket 0 was expanded by 258MB 00:04:25.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.887 EAL: request: mp_malloc_sync 00:04:25.887 EAL: No shared files mode enabled, IPC is disabled 00:04:25.887 EAL: Heap on socket 0 was shrunk by 258MB 00:04:26.146 EAL: Trying to obtain current memory policy. 00:04:26.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.405 EAL: Restoring previous memory policy: 4 00:04:26.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.405 EAL: request: mp_malloc_sync 00:04:26.405 EAL: No shared files mode enabled, IPC is disabled 00:04:26.405 EAL: Heap on socket 0 was expanded by 514MB 00:04:27.393 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.393 EAL: request: mp_malloc_sync 00:04:27.393 EAL: No shared files mode enabled, IPC is disabled 00:04:27.393 EAL: Heap on socket 0 was shrunk by 514MB 00:04:27.963 EAL: Trying to obtain current memory policy. 
00:04:27.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.224 EAL: Restoring previous memory policy: 4 00:04:28.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.224 EAL: request: mp_malloc_sync 00:04:28.224 EAL: No shared files mode enabled, IPC is disabled 00:04:28.224 EAL: Heap on socket 0 was expanded by 1026MB 00:04:30.134 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.134 EAL: request: mp_malloc_sync 00:04:30.134 EAL: No shared files mode enabled, IPC is disabled 00:04:30.134 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:32.046 passed 00:04:32.046 00:04:32.046 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.046 suites 1 1 n/a 0 0 00:04:32.046 tests 2 2 2 0 0 00:04:32.046 asserts 5754 5754 5754 0 n/a 00:04:32.046 00:04:32.046 Elapsed time = 7.859 seconds 00:04:32.046 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.046 EAL: request: mp_malloc_sync 00:04:32.046 EAL: No shared files mode enabled, IPC is disabled 00:04:32.046 EAL: Heap on socket 0 was shrunk by 2MB 00:04:32.046 EAL: No shared files mode enabled, IPC is disabled 00:04:32.046 EAL: No shared files mode enabled, IPC is disabled 00:04:32.046 EAL: No shared files mode enabled, IPC is disabled 00:04:32.046 00:04:32.046 real 0m8.183s 00:04:32.046 user 0m7.235s 00:04:32.046 sys 0m0.790s 00:04:32.046 11:56:35 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.046 ************************************ 00:04:32.046 END TEST env_vtophys 00:04:32.046 ************************************ 00:04:32.046 11:56:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:32.046 11:56:35 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:32.046 11:56:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.046 11:56:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.046 11:56:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.046 
************************************ 00:04:32.047 START TEST env_pci 00:04:32.047 ************************************ 00:04:32.047 11:56:35 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:32.047 00:04:32.047 00:04:32.047 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.047 http://cunit.sourceforge.net/ 00:04:32.047 00:04:32.047 00:04:32.047 Suite: pci 00:04:32.047 Test: pci_hook ...[2024-11-19 11:56:35.319365] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56825 has claimed it 00:04:32.047 passed 00:04:32.047 00:04:32.047 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.047 suites 1 1 n/a 0 0 00:04:32.047 tests 1 1 1 0 0 00:04:32.047 asserts 25 25 25 0 n/a 00:04:32.047 00:04:32.047 Elapsed time = 0.006 seconds 00:04:32.047 EAL: Cannot find device (10000:00:01.0) 00:04:32.047 EAL: Failed to attach device on primary process 00:04:32.047 00:04:32.047 real 0m0.099s 00:04:32.047 user 0m0.039s 00:04:32.047 sys 0m0.058s 00:04:32.047 ************************************ 00:04:32.047 END TEST env_pci 00:04:32.047 ************************************ 00:04:32.047 11:56:35 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.047 11:56:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:32.307 11:56:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:32.307 11:56:35 env -- env/env.sh@15 -- # uname 00:04:32.307 11:56:35 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:32.307 11:56:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:32.307 11:56:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.307 11:56:35 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:32.307 11:56:35 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.307 11:56:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.307 ************************************ 00:04:32.307 START TEST env_dpdk_post_init 00:04:32.307 ************************************ 00:04:32.307 11:56:35 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.307 EAL: Detected CPU lcores: 10 00:04:32.307 EAL: Detected NUMA nodes: 1 00:04:32.307 EAL: Detected shared linkage of DPDK 00:04:32.307 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.307 EAL: Selected IOVA mode 'PA' 00:04:32.307 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.307 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:32.307 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:32.566 Starting DPDK initialization... 00:04:32.566 Starting SPDK post initialization... 00:04:32.566 SPDK NVMe probe 00:04:32.566 Attaching to 0000:00:10.0 00:04:32.566 Attaching to 0000:00:11.0 00:04:32.566 Attached to 0000:00:10.0 00:04:32.566 Attached to 0000:00:11.0 00:04:32.566 Cleaning up... 
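The env_dpdk_post_init run above probes and attaches two NVMe controllers by PCI address (0000:00:10.0 and 0000:00:11.0). Those strings are standard domain:bus:device.function (BDF) notation with hexadecimal fields; a small hedged parser, not part of the SPDK test itself, shows how such an address decomposes:

```python
def parse_bdf(bdf: str) -> tuple[int, int, int, int]:
    """Split a PCI address like '0000:00:10.0' into (domain, bus, device, function).

    Illustrative helper only -- SPDK/DPDK have their own parsers; every field
    in BDF notation is hexadecimal, so device '10' is decimal 16.
    """
    domain, bus, devfn = bdf.split(":")
    device, function = devfn.split(".")
    return int(domain, 16), int(bus, 16), int(device, 16), int(function, 16)
```

For the two devices attached above, the parser gives devices 16 and 17 on bus 0 of domain 0.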
00:04:32.566 00:04:32.566 real 0m0.279s 00:04:32.566 user 0m0.091s 00:04:32.566 sys 0m0.088s 00:04:32.566 11:56:35 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.566 11:56:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.566 ************************************ 00:04:32.566 END TEST env_dpdk_post_init 00:04:32.566 ************************************ 00:04:32.566 11:56:35 env -- env/env.sh@26 -- # uname 00:04:32.566 11:56:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:32.566 11:56:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.566 11:56:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.566 11:56:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.566 11:56:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.566 ************************************ 00:04:32.566 START TEST env_mem_callbacks 00:04:32.566 ************************************ 00:04:32.566 11:56:35 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.566 EAL: Detected CPU lcores: 10 00:04:32.566 EAL: Detected NUMA nodes: 1 00:04:32.566 EAL: Detected shared linkage of DPDK 00:04:32.566 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.566 EAL: Selected IOVA mode 'PA' 00:04:32.827 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.827 00:04:32.827 00:04:32.827 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.827 http://cunit.sourceforge.net/ 00:04:32.827 00:04:32.827 00:04:32.827 Suite: memory 00:04:32.827 Test: test ... 
00:04:32.827 register 0x200000200000 2097152 00:04:32.827 malloc 3145728 00:04:32.827 register 0x200000400000 4194304 00:04:32.827 buf 0x2000004fffc0 len 3145728 PASSED 00:04:32.827 malloc 64 00:04:32.827 buf 0x2000004ffec0 len 64 PASSED 00:04:32.827 malloc 4194304 00:04:32.827 register 0x200000800000 6291456 00:04:32.827 buf 0x2000009fffc0 len 4194304 PASSED 00:04:32.827 free 0x2000004fffc0 3145728 00:04:32.827 free 0x2000004ffec0 64 00:04:32.827 unregister 0x200000400000 4194304 PASSED 00:04:32.827 free 0x2000009fffc0 4194304 00:04:32.827 unregister 0x200000800000 6291456 PASSED 00:04:32.827 malloc 8388608 00:04:32.827 register 0x200000400000 10485760 00:04:32.827 buf 0x2000005fffc0 len 8388608 PASSED 00:04:32.827 free 0x2000005fffc0 8388608 00:04:32.827 unregister 0x200000400000 10485760 PASSED 00:04:32.827 passed 00:04:32.827 00:04:32.827 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.827 suites 1 1 n/a 0 0 00:04:32.827 tests 1 1 1 0 0 00:04:32.827 asserts 15 15 15 0 n/a 00:04:32.827 00:04:32.827 Elapsed time = 0.081 seconds 00:04:32.827 00:04:32.827 real 0m0.280s 00:04:32.827 user 0m0.112s 00:04:32.827 sys 0m0.064s 00:04:32.827 11:56:36 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.827 11:56:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:32.827 ************************************ 00:04:32.827 END TEST env_mem_callbacks 00:04:32.827 ************************************ 00:04:32.827 00:04:32.827 real 0m9.692s 00:04:32.827 user 0m7.953s 00:04:32.827 sys 0m1.374s 00:04:32.827 11:56:36 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.827 11:56:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.827 ************************************ 00:04:32.827 END TEST env 00:04:32.827 ************************************ 00:04:32.827 11:56:36 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.827 11:56:36 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.827 11:56:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.827 11:56:36 -- common/autotest_common.sh@10 -- # set +x 00:04:32.828 ************************************ 00:04:32.828 START TEST rpc 00:04:32.828 ************************************ 00:04:32.828 11:56:36 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:33.088 * Looking for test storage... 00:04:33.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.088 11:56:36 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:33.088 11:56:36 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:33.088 11:56:36 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:33.088 11:56:36 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:33.088 11:56:36 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.088 11:56:36 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.088 11:56:36 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.088 11:56:36 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.088 11:56:36 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.088 11:56:36 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.088 11:56:36 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.088 11:56:36 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.088 11:56:36 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.088 11:56:36 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.088 11:56:36 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.088 11:56:36 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:33.088 11:56:36 rpc -- scripts/common.sh@345 -- # : 1 00:04:33.088 11:56:36 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.088 11:56:36 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.088 11:56:36 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:33.088 11:56:36 rpc -- scripts/common.sh@353 -- # local d=1 00:04:33.088 11:56:36 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.088 11:56:36 rpc -- scripts/common.sh@355 -- # echo 1 00:04:33.088 11:56:36 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.088 11:56:36 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:33.088 11:56:36 rpc -- scripts/common.sh@353 -- # local d=2 00:04:33.088 11:56:36 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.088 11:56:36 rpc -- scripts/common.sh@355 -- # echo 2 00:04:33.088 11:56:36 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.088 11:56:36 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.088 11:56:36 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.088 11:56:36 rpc -- scripts/common.sh@368 -- # return 0 00:04:33.088 11:56:36 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.088 11:56:36 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:33.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.088 --rc genhtml_branch_coverage=1 00:04:33.088 --rc genhtml_function_coverage=1 00:04:33.088 --rc genhtml_legend=1 00:04:33.088 --rc geninfo_all_blocks=1 00:04:33.088 --rc geninfo_unexecuted_blocks=1 00:04:33.088 00:04:33.088 ' 00:04:33.089 11:56:36 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:33.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.089 --rc genhtml_branch_coverage=1 00:04:33.089 --rc genhtml_function_coverage=1 00:04:33.089 --rc genhtml_legend=1 00:04:33.089 --rc geninfo_all_blocks=1 00:04:33.089 --rc geninfo_unexecuted_blocks=1 00:04:33.089 00:04:33.089 ' 00:04:33.089 11:56:36 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:33.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:33.089 --rc genhtml_branch_coverage=1 00:04:33.089 --rc genhtml_function_coverage=1 00:04:33.089 --rc genhtml_legend=1 00:04:33.089 --rc geninfo_all_blocks=1 00:04:33.089 --rc geninfo_unexecuted_blocks=1 00:04:33.089 00:04:33.089 ' 00:04:33.089 11:56:36 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:33.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.089 --rc genhtml_branch_coverage=1 00:04:33.089 --rc genhtml_function_coverage=1 00:04:33.089 --rc genhtml_legend=1 00:04:33.089 --rc geninfo_all_blocks=1 00:04:33.089 --rc geninfo_unexecuted_blocks=1 00:04:33.089 00:04:33.089 ' 00:04:33.089 11:56:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56952 00:04:33.089 11:56:36 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:33.089 11:56:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.089 11:56:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56952 00:04:33.089 11:56:36 rpc -- common/autotest_common.sh@835 -- # '[' -z 56952 ']' 00:04:33.089 11:56:36 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.089 11:56:36 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.089 11:56:36 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.089 11:56:36 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.089 11:56:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.349 [2024-11-19 11:56:36.524621] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:33.349 [2024-11-19 11:56:36.524754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56952 ] 00:04:33.349 [2024-11-19 11:56:36.697193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.610 [2024-11-19 11:56:36.809354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:33.610 [2024-11-19 11:56:36.809420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56952' to capture a snapshot of events at runtime. 00:04:33.610 [2024-11-19 11:56:36.809430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:33.610 [2024-11-19 11:56:36.809440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:33.610 [2024-11-19 11:56:36.809448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56952 for offline analysis/debug. 
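Earlier in this log (the 00:04:33 trace of scripts/common.sh), the rpc test drives a `cmp_versions`/`lt` helper to check `lcov --version` against a threshold (`lt 1.15 2`): split both versions into numeric fields and compare field by field, padding the shorter one with zeros. A hedged Python re-implementation of that comparison logic, under the assumption that dot-separated numeric fields are the only case exercised here (the shell version also splits on `-` and `:`):

```python
def version_lt(a: str, b: str) -> bool:
    """Return True if version string a is strictly less than b.

    Sketch of the cmp_versions '<' path traced above: numeric, field-by-field
    comparison with missing trailing fields treated as 0.
    """
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    width = max(len(pa), len(pb))
    pa += [0] * (width - len(pa))
    pb += [0] * (width - len(pb))
    return pa < pb  # Python compares padded lists lexicographically
```

With this, `version_lt("1.15", "2")` is true, which is why the trace above takes the `return 0` branch and enables the LCOV options.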
00:04:33.610 [2024-11-19 11:56:36.810514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.549 11:56:37 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.549 11:56:37 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:34.549 11:56:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:34.549 11:56:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:34.549 11:56:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:34.549 11:56:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:34.549 11:56:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.549 11:56:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.549 11:56:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.549 ************************************ 00:04:34.549 START TEST rpc_integrity 00:04:34.549 ************************************ 00:04:34.549 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:34.549 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.549 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.549 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.549 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.549 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.549 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.549 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.549 11:56:37 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.549 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.549 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.549 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.549 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:34.549 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.549 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.549 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.549 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.549 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.549 { 00:04:34.550 "name": "Malloc0", 00:04:34.550 "aliases": [ 00:04:34.550 "f4d5c0e3-8ab0-40d7-9d50-cb114253afea" 00:04:34.550 ], 00:04:34.550 "product_name": "Malloc disk", 00:04:34.550 "block_size": 512, 00:04:34.550 "num_blocks": 16384, 00:04:34.550 "uuid": "f4d5c0e3-8ab0-40d7-9d50-cb114253afea", 00:04:34.550 "assigned_rate_limits": { 00:04:34.550 "rw_ios_per_sec": 0, 00:04:34.550 "rw_mbytes_per_sec": 0, 00:04:34.550 "r_mbytes_per_sec": 0, 00:04:34.550 "w_mbytes_per_sec": 0 00:04:34.550 }, 00:04:34.550 "claimed": false, 00:04:34.550 "zoned": false, 00:04:34.550 "supported_io_types": { 00:04:34.550 "read": true, 00:04:34.550 "write": true, 00:04:34.550 "unmap": true, 00:04:34.550 "flush": true, 00:04:34.550 "reset": true, 00:04:34.550 "nvme_admin": false, 00:04:34.550 "nvme_io": false, 00:04:34.550 "nvme_io_md": false, 00:04:34.550 "write_zeroes": true, 00:04:34.550 "zcopy": true, 00:04:34.550 "get_zone_info": false, 00:04:34.550 "zone_management": false, 00:04:34.550 "zone_append": false, 00:04:34.550 "compare": false, 00:04:34.550 "compare_and_write": false, 00:04:34.550 "abort": true, 00:04:34.550 "seek_hole": false, 
00:04:34.550 "seek_data": false, 00:04:34.550 "copy": true, 00:04:34.550 "nvme_iov_md": false 00:04:34.550 }, 00:04:34.550 "memory_domains": [ 00:04:34.550 { 00:04:34.550 "dma_device_id": "system", 00:04:34.550 "dma_device_type": 1 00:04:34.550 }, 00:04:34.550 { 00:04:34.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.550 "dma_device_type": 2 00:04:34.550 } 00:04:34.550 ], 00:04:34.550 "driver_specific": {} 00:04:34.550 } 00:04:34.550 ]' 00:04:34.550 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:34.550 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.550 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:34.550 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.550 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.550 [2024-11-19 11:56:37.839501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:34.550 [2024-11-19 11:56:37.839585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.550 [2024-11-19 11:56:37.839615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:34.550 [2024-11-19 11:56:37.839635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.550 [2024-11-19 11:56:37.842565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.550 [2024-11-19 11:56:37.842614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.550 Passthru0 00:04:34.550 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.550 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.550 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.550 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:34.550 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.550 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.550 { 00:04:34.550 "name": "Malloc0", 00:04:34.550 "aliases": [ 00:04:34.550 "f4d5c0e3-8ab0-40d7-9d50-cb114253afea" 00:04:34.550 ], 00:04:34.550 "product_name": "Malloc disk", 00:04:34.550 "block_size": 512, 00:04:34.550 "num_blocks": 16384, 00:04:34.550 "uuid": "f4d5c0e3-8ab0-40d7-9d50-cb114253afea", 00:04:34.550 "assigned_rate_limits": { 00:04:34.550 "rw_ios_per_sec": 0, 00:04:34.550 "rw_mbytes_per_sec": 0, 00:04:34.550 "r_mbytes_per_sec": 0, 00:04:34.550 "w_mbytes_per_sec": 0 00:04:34.550 }, 00:04:34.550 "claimed": true, 00:04:34.550 "claim_type": "exclusive_write", 00:04:34.550 "zoned": false, 00:04:34.550 "supported_io_types": { 00:04:34.550 "read": true, 00:04:34.550 "write": true, 00:04:34.550 "unmap": true, 00:04:34.550 "flush": true, 00:04:34.550 "reset": true, 00:04:34.550 "nvme_admin": false, 00:04:34.550 "nvme_io": false, 00:04:34.550 "nvme_io_md": false, 00:04:34.550 "write_zeroes": true, 00:04:34.550 "zcopy": true, 00:04:34.550 "get_zone_info": false, 00:04:34.550 "zone_management": false, 00:04:34.550 "zone_append": false, 00:04:34.550 "compare": false, 00:04:34.550 "compare_and_write": false, 00:04:34.550 "abort": true, 00:04:34.550 "seek_hole": false, 00:04:34.550 "seek_data": false, 00:04:34.550 "copy": true, 00:04:34.550 "nvme_iov_md": false 00:04:34.550 }, 00:04:34.550 "memory_domains": [ 00:04:34.550 { 00:04:34.550 "dma_device_id": "system", 00:04:34.550 "dma_device_type": 1 00:04:34.550 }, 00:04:34.550 { 00:04:34.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.550 "dma_device_type": 2 00:04:34.550 } 00:04:34.550 ], 00:04:34.550 "driver_specific": {} 00:04:34.550 }, 00:04:34.550 { 00:04:34.550 "name": "Passthru0", 00:04:34.550 "aliases": [ 00:04:34.550 "ddfe55aa-616f-5497-8708-e5699793dcad" 00:04:34.550 ], 00:04:34.550 "product_name": "passthru", 00:04:34.550 
"block_size": 512, 00:04:34.550 "num_blocks": 16384, 00:04:34.550 "uuid": "ddfe55aa-616f-5497-8708-e5699793dcad", 00:04:34.550 "assigned_rate_limits": { 00:04:34.550 "rw_ios_per_sec": 0, 00:04:34.550 "rw_mbytes_per_sec": 0, 00:04:34.550 "r_mbytes_per_sec": 0, 00:04:34.550 "w_mbytes_per_sec": 0 00:04:34.550 }, 00:04:34.550 "claimed": false, 00:04:34.550 "zoned": false, 00:04:34.550 "supported_io_types": { 00:04:34.550 "read": true, 00:04:34.550 "write": true, 00:04:34.550 "unmap": true, 00:04:34.550 "flush": true, 00:04:34.550 "reset": true, 00:04:34.550 "nvme_admin": false, 00:04:34.550 "nvme_io": false, 00:04:34.550 "nvme_io_md": false, 00:04:34.550 "write_zeroes": true, 00:04:34.550 "zcopy": true, 00:04:34.550 "get_zone_info": false, 00:04:34.550 "zone_management": false, 00:04:34.550 "zone_append": false, 00:04:34.550 "compare": false, 00:04:34.550 "compare_and_write": false, 00:04:34.550 "abort": true, 00:04:34.550 "seek_hole": false, 00:04:34.550 "seek_data": false, 00:04:34.550 "copy": true, 00:04:34.550 "nvme_iov_md": false 00:04:34.550 }, 00:04:34.550 "memory_domains": [ 00:04:34.550 { 00:04:34.550 "dma_device_id": "system", 00:04:34.550 "dma_device_type": 1 00:04:34.550 }, 00:04:34.550 { 00:04:34.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.550 "dma_device_type": 2 00:04:34.550 } 00:04:34.550 ], 00:04:34.550 "driver_specific": { 00:04:34.550 "passthru": { 00:04:34.550 "name": "Passthru0", 00:04:34.550 "base_bdev_name": "Malloc0" 00:04:34.550 } 00:04:34.550 } 00:04:34.550 } 00:04:34.550 ]' 00:04:34.550 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:34.550 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.550 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.550 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.550 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.810 11:56:37 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.810 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:34.810 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.810 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.810 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.810 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.810 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.810 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.810 11:56:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.810 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.810 11:56:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.810 ************************************ 00:04:34.810 END TEST rpc_integrity 00:04:34.810 ************************************ 00:04:34.810 11:56:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.810 00:04:34.810 real 0m0.369s 00:04:34.810 user 0m0.191s 00:04:34.810 sys 0m0.062s 00:04:34.810 11:56:38 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.810 11:56:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.810 11:56:38 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:34.810 11:56:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.810 11:56:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.810 11:56:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.811 ************************************ 00:04:34.811 START TEST rpc_plugins 00:04:34.811 ************************************ 00:04:34.811 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:34.811 11:56:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:34.811 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.811 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.811 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.811 11:56:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:34.811 11:56:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:34.811 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.811 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.811 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.811 11:56:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:34.811 { 00:04:34.811 "name": "Malloc1", 00:04:34.811 "aliases": [ 00:04:34.811 "ecca2b5c-a7d8-4761-ae43-4d3102cb9ce8" 00:04:34.811 ], 00:04:34.811 "product_name": "Malloc disk", 00:04:34.811 "block_size": 4096, 00:04:34.811 "num_blocks": 256, 00:04:34.811 "uuid": "ecca2b5c-a7d8-4761-ae43-4d3102cb9ce8", 00:04:34.811 "assigned_rate_limits": { 00:04:34.811 "rw_ios_per_sec": 0, 00:04:34.811 "rw_mbytes_per_sec": 0, 00:04:34.811 "r_mbytes_per_sec": 0, 00:04:34.811 "w_mbytes_per_sec": 0 00:04:34.811 }, 00:04:34.811 "claimed": false, 00:04:34.811 "zoned": false, 00:04:34.811 "supported_io_types": { 00:04:34.811 "read": true, 00:04:34.811 "write": true, 00:04:34.811 "unmap": true, 00:04:34.811 "flush": true, 00:04:34.811 "reset": true, 00:04:34.811 "nvme_admin": false, 00:04:34.811 "nvme_io": false, 00:04:34.811 "nvme_io_md": false, 00:04:34.811 "write_zeroes": true, 00:04:34.811 "zcopy": true, 00:04:34.811 "get_zone_info": false, 00:04:34.811 "zone_management": false, 00:04:34.811 "zone_append": false, 00:04:34.811 "compare": false, 00:04:34.811 "compare_and_write": false, 00:04:34.811 "abort": true, 00:04:34.811 "seek_hole": false, 00:04:34.811 "seek_data": false, 00:04:34.811 "copy": 
true, 00:04:34.811 "nvme_iov_md": false 00:04:34.811 }, 00:04:34.811 "memory_domains": [ 00:04:34.811 { 00:04:34.811 "dma_device_id": "system", 00:04:34.811 "dma_device_type": 1 00:04:34.811 }, 00:04:34.811 { 00:04:34.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.811 "dma_device_type": 2 00:04:34.811 } 00:04:34.811 ], 00:04:34.811 "driver_specific": {} 00:04:34.811 } 00:04:34.811 ]' 00:04:34.811 11:56:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:35.072 11:56:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:35.072 11:56:38 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:35.072 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.072 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.072 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.072 11:56:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:35.072 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.072 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.072 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.072 11:56:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:35.072 11:56:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:35.072 ************************************ 00:04:35.072 END TEST rpc_plugins 00:04:35.072 ************************************ 00:04:35.072 11:56:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:35.072 00:04:35.072 real 0m0.172s 00:04:35.072 user 0m0.092s 00:04:35.072 sys 0m0.031s 00:04:35.072 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.072 11:56:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.072 11:56:38 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:35.072 11:56:38 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.072 11:56:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.072 11:56:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.072 ************************************ 00:04:35.072 START TEST rpc_trace_cmd_test 00:04:35.072 ************************************ 00:04:35.072 11:56:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:35.072 11:56:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:35.072 11:56:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:35.072 11:56:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.072 11:56:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.072 11:56:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.072 11:56:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:35.072 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56952", 00:04:35.072 "tpoint_group_mask": "0x8", 00:04:35.072 "iscsi_conn": { 00:04:35.072 "mask": "0x2", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "scsi": { 00:04:35.072 "mask": "0x4", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "bdev": { 00:04:35.072 "mask": "0x8", 00:04:35.072 "tpoint_mask": "0xffffffffffffffff" 00:04:35.072 }, 00:04:35.072 "nvmf_rdma": { 00:04:35.072 "mask": "0x10", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "nvmf_tcp": { 00:04:35.072 "mask": "0x20", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "ftl": { 00:04:35.072 "mask": "0x40", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "blobfs": { 00:04:35.072 "mask": "0x80", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "dsa": { 00:04:35.072 "mask": "0x200", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "thread": { 00:04:35.072 "mask": "0x400", 00:04:35.072 
"tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "nvme_pcie": { 00:04:35.072 "mask": "0x800", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "iaa": { 00:04:35.072 "mask": "0x1000", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "nvme_tcp": { 00:04:35.072 "mask": "0x2000", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "bdev_nvme": { 00:04:35.072 "mask": "0x4000", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "sock": { 00:04:35.072 "mask": "0x8000", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "blob": { 00:04:35.072 "mask": "0x10000", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "bdev_raid": { 00:04:35.072 "mask": "0x20000", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 }, 00:04:35.072 "scheduler": { 00:04:35.072 "mask": "0x40000", 00:04:35.072 "tpoint_mask": "0x0" 00:04:35.072 } 00:04:35.072 }' 00:04:35.072 11:56:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:35.072 11:56:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:35.072 11:56:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:35.333 11:56:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:35.333 11:56:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:35.333 11:56:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:35.333 11:56:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:35.333 11:56:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:35.333 11:56:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:35.333 ************************************ 00:04:35.333 END TEST rpc_trace_cmd_test 00:04:35.333 ************************************ 00:04:35.333 11:56:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:35.333 00:04:35.333 real 0m0.261s 00:04:35.333 user 
0m0.205s 00:04:35.333 sys 0m0.043s 00:04:35.333 11:56:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.333 11:56:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.333 11:56:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:35.333 11:56:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:35.333 11:56:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:35.333 11:56:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.333 11:56:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.333 11:56:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.333 ************************************ 00:04:35.333 START TEST rpc_daemon_integrity 00:04:35.333 ************************************ 00:04:35.333 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:35.333 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.333 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.333 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.333 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.333 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.333 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.593 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.593 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.593 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.593 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.593 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.593 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:35.593 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.593 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.594 { 00:04:35.594 "name": "Malloc2", 00:04:35.594 "aliases": [ 00:04:35.594 "a02ed624-5428-4465-8b9a-957182a50654" 00:04:35.594 ], 00:04:35.594 "product_name": "Malloc disk", 00:04:35.594 "block_size": 512, 00:04:35.594 "num_blocks": 16384, 00:04:35.594 "uuid": "a02ed624-5428-4465-8b9a-957182a50654", 00:04:35.594 "assigned_rate_limits": { 00:04:35.594 "rw_ios_per_sec": 0, 00:04:35.594 "rw_mbytes_per_sec": 0, 00:04:35.594 "r_mbytes_per_sec": 0, 00:04:35.594 "w_mbytes_per_sec": 0 00:04:35.594 }, 00:04:35.594 "claimed": false, 00:04:35.594 "zoned": false, 00:04:35.594 "supported_io_types": { 00:04:35.594 "read": true, 00:04:35.594 "write": true, 00:04:35.594 "unmap": true, 00:04:35.594 "flush": true, 00:04:35.594 "reset": true, 00:04:35.594 "nvme_admin": false, 00:04:35.594 "nvme_io": false, 00:04:35.594 "nvme_io_md": false, 00:04:35.594 "write_zeroes": true, 00:04:35.594 "zcopy": true, 00:04:35.594 "get_zone_info": false, 00:04:35.594 "zone_management": false, 00:04:35.594 "zone_append": false, 00:04:35.594 "compare": false, 00:04:35.594 "compare_and_write": false, 00:04:35.594 "abort": true, 00:04:35.594 "seek_hole": false, 00:04:35.594 "seek_data": false, 00:04:35.594 "copy": true, 00:04:35.594 "nvme_iov_md": false 00:04:35.594 }, 00:04:35.594 "memory_domains": [ 00:04:35.594 { 00:04:35.594 "dma_device_id": "system", 00:04:35.594 "dma_device_type": 1 00:04:35.594 }, 00:04:35.594 { 00:04:35.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.594 "dma_device_type": 2 00:04:35.594 } 
00:04:35.594 ], 00:04:35.594 "driver_specific": {} 00:04:35.594 } 00:04:35.594 ]' 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.594 [2024-11-19 11:56:38.817804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:35.594 [2024-11-19 11:56:38.817885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.594 [2024-11-19 11:56:38.817909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:35.594 [2024-11-19 11:56:38.817922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.594 [2024-11-19 11:56:38.820235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.594 Passthru0 00:04:35.594 [2024-11-19 11:56:38.820324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.594 { 00:04:35.594 "name": "Malloc2", 00:04:35.594 "aliases": [ 00:04:35.594 "a02ed624-5428-4465-8b9a-957182a50654" 
00:04:35.594 ], 00:04:35.594 "product_name": "Malloc disk", 00:04:35.594 "block_size": 512, 00:04:35.594 "num_blocks": 16384, 00:04:35.594 "uuid": "a02ed624-5428-4465-8b9a-957182a50654", 00:04:35.594 "assigned_rate_limits": { 00:04:35.594 "rw_ios_per_sec": 0, 00:04:35.594 "rw_mbytes_per_sec": 0, 00:04:35.594 "r_mbytes_per_sec": 0, 00:04:35.594 "w_mbytes_per_sec": 0 00:04:35.594 }, 00:04:35.594 "claimed": true, 00:04:35.594 "claim_type": "exclusive_write", 00:04:35.594 "zoned": false, 00:04:35.594 "supported_io_types": { 00:04:35.594 "read": true, 00:04:35.594 "write": true, 00:04:35.594 "unmap": true, 00:04:35.594 "flush": true, 00:04:35.594 "reset": true, 00:04:35.594 "nvme_admin": false, 00:04:35.594 "nvme_io": false, 00:04:35.594 "nvme_io_md": false, 00:04:35.594 "write_zeroes": true, 00:04:35.594 "zcopy": true, 00:04:35.594 "get_zone_info": false, 00:04:35.594 "zone_management": false, 00:04:35.594 "zone_append": false, 00:04:35.594 "compare": false, 00:04:35.594 "compare_and_write": false, 00:04:35.594 "abort": true, 00:04:35.594 "seek_hole": false, 00:04:35.594 "seek_data": false, 00:04:35.594 "copy": true, 00:04:35.594 "nvme_iov_md": false 00:04:35.594 }, 00:04:35.594 "memory_domains": [ 00:04:35.594 { 00:04:35.594 "dma_device_id": "system", 00:04:35.594 "dma_device_type": 1 00:04:35.594 }, 00:04:35.594 { 00:04:35.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.594 "dma_device_type": 2 00:04:35.594 } 00:04:35.594 ], 00:04:35.594 "driver_specific": {} 00:04:35.594 }, 00:04:35.594 { 00:04:35.594 "name": "Passthru0", 00:04:35.594 "aliases": [ 00:04:35.594 "6340f3ef-553e-59ff-9ce2-9444c550d3a5" 00:04:35.594 ], 00:04:35.594 "product_name": "passthru", 00:04:35.594 "block_size": 512, 00:04:35.594 "num_blocks": 16384, 00:04:35.594 "uuid": "6340f3ef-553e-59ff-9ce2-9444c550d3a5", 00:04:35.594 "assigned_rate_limits": { 00:04:35.594 "rw_ios_per_sec": 0, 00:04:35.594 "rw_mbytes_per_sec": 0, 00:04:35.594 "r_mbytes_per_sec": 0, 00:04:35.594 "w_mbytes_per_sec": 0 
00:04:35.594 }, 00:04:35.594 "claimed": false, 00:04:35.594 "zoned": false, 00:04:35.594 "supported_io_types": { 00:04:35.594 "read": true, 00:04:35.594 "write": true, 00:04:35.594 "unmap": true, 00:04:35.594 "flush": true, 00:04:35.594 "reset": true, 00:04:35.594 "nvme_admin": false, 00:04:35.594 "nvme_io": false, 00:04:35.594 "nvme_io_md": false, 00:04:35.594 "write_zeroes": true, 00:04:35.594 "zcopy": true, 00:04:35.594 "get_zone_info": false, 00:04:35.594 "zone_management": false, 00:04:35.594 "zone_append": false, 00:04:35.594 "compare": false, 00:04:35.594 "compare_and_write": false, 00:04:35.594 "abort": true, 00:04:35.594 "seek_hole": false, 00:04:35.594 "seek_data": false, 00:04:35.594 "copy": true, 00:04:35.594 "nvme_iov_md": false 00:04:35.594 }, 00:04:35.594 "memory_domains": [ 00:04:35.594 { 00:04:35.594 "dma_device_id": "system", 00:04:35.594 "dma_device_type": 1 00:04:35.594 }, 00:04:35.594 { 00:04:35.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.594 "dma_device_type": 2 00:04:35.594 } 00:04:35.594 ], 00:04:35.594 "driver_specific": { 00:04:35.594 "passthru": { 00:04:35.594 "name": "Passthru0", 00:04:35.594 "base_bdev_name": "Malloc2" 00:04:35.594 } 00:04:35.594 } 00:04:35.594 } 00:04:35.594 ]' 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.594 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.595 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.595 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.595 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.595 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:35.595 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:35.595 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.595 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.595 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.595 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.595 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.595 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.595 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.595 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.855 11:56:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.855 00:04:35.855 real 0m0.331s 00:04:35.855 user 0m0.183s 00:04:35.855 sys 0m0.047s 00:04:35.855 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.855 11:56:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.855 ************************************ 00:04:35.855 END TEST rpc_daemon_integrity 00:04:35.855 ************************************ 00:04:35.856 11:56:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:35.856 11:56:39 rpc -- rpc/rpc.sh@84 -- # killprocess 56952 00:04:35.856 11:56:39 rpc -- common/autotest_common.sh@954 -- # '[' -z 56952 ']' 00:04:35.856 11:56:39 rpc -- common/autotest_common.sh@958 -- # kill -0 56952 00:04:35.856 11:56:39 rpc -- common/autotest_common.sh@959 -- # uname 00:04:35.856 11:56:39 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.856 11:56:39 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56952 00:04:35.856 11:56:39 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.856 11:56:39 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.856 
11:56:39 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56952' 00:04:35.856 killing process with pid 56952 00:04:35.856 11:56:39 rpc -- common/autotest_common.sh@973 -- # kill 56952 00:04:35.856 11:56:39 rpc -- common/autotest_common.sh@978 -- # wait 56952 00:04:38.416 00:04:38.416 real 0m5.220s 00:04:38.416 user 0m5.751s 00:04:38.416 sys 0m0.932s 00:04:38.416 11:56:41 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.416 11:56:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.416 ************************************ 00:04:38.416 END TEST rpc 00:04:38.416 ************************************ 00:04:38.416 11:56:41 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:38.416 11:56:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.416 11:56:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.416 11:56:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.416 ************************************ 00:04:38.416 START TEST skip_rpc 00:04:38.416 ************************************ 00:04:38.416 11:56:41 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:38.416 * Looking for test storage... 
00:04:38.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.416 11:56:41 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:38.416 11:56:41 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:38.416 11:56:41 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:38.416 11:56:41 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.416 11:56:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.417 11:56:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.417 11:56:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.417 11:56:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.417 11:56:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.417 11:56:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:38.417 11:56:41 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.417 11:56:41 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:38.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.417 --rc genhtml_branch_coverage=1 00:04:38.417 --rc genhtml_function_coverage=1 00:04:38.417 --rc genhtml_legend=1 00:04:38.417 --rc geninfo_all_blocks=1 00:04:38.417 --rc geninfo_unexecuted_blocks=1 00:04:38.417 00:04:38.417 ' 00:04:38.417 11:56:41 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:38.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.417 --rc genhtml_branch_coverage=1 00:04:38.417 --rc genhtml_function_coverage=1 00:04:38.417 --rc genhtml_legend=1 00:04:38.417 --rc geninfo_all_blocks=1 00:04:38.417 --rc geninfo_unexecuted_blocks=1 00:04:38.417 00:04:38.417 ' 00:04:38.417 11:56:41 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:38.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.417 --rc genhtml_branch_coverage=1 00:04:38.417 --rc genhtml_function_coverage=1 00:04:38.417 --rc genhtml_legend=1 00:04:38.417 --rc geninfo_all_blocks=1 00:04:38.417 --rc geninfo_unexecuted_blocks=1 00:04:38.417 00:04:38.417 ' 00:04:38.417 11:56:41 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:38.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.417 --rc genhtml_branch_coverage=1 00:04:38.417 --rc genhtml_function_coverage=1 00:04:38.417 --rc genhtml_legend=1 00:04:38.417 --rc geninfo_all_blocks=1 00:04:38.417 --rc geninfo_unexecuted_blocks=1 00:04:38.417 00:04:38.417 ' 00:04:38.417 11:56:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:38.417 11:56:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.417 11:56:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:38.417 11:56:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.417 11:56:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.417 11:56:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.417 ************************************ 00:04:38.417 START TEST skip_rpc 00:04:38.417 ************************************ 00:04:38.417 11:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:38.417 11:56:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57181 00:04:38.417 11:56:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.417 11:56:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:38.417 11:56:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:38.417 [2024-11-19 11:56:41.777813] Starting SPDK v25.01-pre 
git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:38.417 [2024-11-19 11:56:41.777924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57181 ] 00:04:38.676 [2024-11-19 11:56:41.950059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.936 [2024-11-19 11:56:42.073346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:44.223 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:44.224 11:56:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:44.224 11:56:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57181 00:04:44.224 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57181 ']' 00:04:44.224 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57181 00:04:44.224 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:44.224 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.224 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57181 00:04:44.224 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.224 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.224 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57181' 00:04:44.224 killing process with pid 57181 00:04:44.224 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57181 00:04:44.224 11:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57181 00:04:46.138 00:04:46.138 real 0m7.425s 00:04:46.138 user 0m6.980s 00:04:46.138 sys 0m0.365s 00:04:46.138 11:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.138 11:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.138 ************************************ 00:04:46.138 END TEST skip_rpc 00:04:46.138 ************************************ 00:04:46.138 11:56:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.138 11:56:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.138 11:56:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.138 11:56:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.138 
************************************ 00:04:46.138 START TEST skip_rpc_with_json 00:04:46.138 ************************************ 00:04:46.138 11:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:46.138 11:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.138 11:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57285 00:04:46.138 11:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.138 11:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.138 11:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57285 00:04:46.138 11:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57285 ']' 00:04:46.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.138 11:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.138 11:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.138 11:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.138 11:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.138 11:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.138 [2024-11-19 11:56:49.275638] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:46.138 [2024-11-19 11:56:49.275757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57285 ] 00:04:46.138 [2024-11-19 11:56:49.450165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.398 [2024-11-19 11:56:49.570327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.337 [2024-11-19 11:56:50.432244] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.337 request: 00:04:47.337 { 00:04:47.337 "trtype": "tcp", 00:04:47.337 "method": "nvmf_get_transports", 00:04:47.337 "req_id": 1 00:04:47.337 } 00:04:47.337 Got JSON-RPC error response 00:04:47.337 response: 00:04:47.337 { 00:04:47.337 "code": -19, 00:04:47.337 "message": "No such device" 00:04:47.337 } 00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.337 [2024-11-19 11:56:50.444326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.337 11:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.337 { 00:04:47.337 "subsystems": [ 00:04:47.337 { 00:04:47.337 "subsystem": "fsdev", 00:04:47.337 "config": [ 00:04:47.337 { 00:04:47.337 "method": "fsdev_set_opts", 00:04:47.337 "params": { 00:04:47.337 "fsdev_io_pool_size": 65535, 00:04:47.337 "fsdev_io_cache_size": 256 00:04:47.337 } 00:04:47.337 } 00:04:47.337 ] 00:04:47.337 }, 00:04:47.337 { 00:04:47.337 "subsystem": "keyring", 00:04:47.337 "config": [] 00:04:47.337 }, 00:04:47.337 { 00:04:47.337 "subsystem": "iobuf", 00:04:47.337 "config": [ 00:04:47.337 { 00:04:47.337 "method": "iobuf_set_options", 00:04:47.337 "params": { 00:04:47.337 "small_pool_count": 8192, 00:04:47.337 "large_pool_count": 1024, 00:04:47.337 "small_bufsize": 8192, 00:04:47.337 "large_bufsize": 135168, 00:04:47.337 "enable_numa": false 00:04:47.337 } 00:04:47.337 } 00:04:47.337 ] 00:04:47.337 }, 00:04:47.337 { 00:04:47.337 "subsystem": "sock", 00:04:47.337 "config": [ 00:04:47.337 { 00:04:47.337 "method": "sock_set_default_impl", 00:04:47.337 "params": { 00:04:47.337 "impl_name": "posix" 00:04:47.337 } 00:04:47.337 }, 00:04:47.337 { 00:04:47.337 "method": "sock_impl_set_options", 00:04:47.337 "params": { 00:04:47.337 "impl_name": "ssl", 00:04:47.337 "recv_buf_size": 4096, 00:04:47.337 "send_buf_size": 4096, 00:04:47.337 "enable_recv_pipe": true, 00:04:47.337 "enable_quickack": false, 00:04:47.337 
"enable_placement_id": 0, 00:04:47.337 "enable_zerocopy_send_server": true, 00:04:47.337 "enable_zerocopy_send_client": false, 00:04:47.337 "zerocopy_threshold": 0, 00:04:47.337 "tls_version": 0, 00:04:47.337 "enable_ktls": false 00:04:47.337 } 00:04:47.337 }, 00:04:47.337 { 00:04:47.337 "method": "sock_impl_set_options", 00:04:47.337 "params": { 00:04:47.337 "impl_name": "posix", 00:04:47.337 "recv_buf_size": 2097152, 00:04:47.337 "send_buf_size": 2097152, 00:04:47.337 "enable_recv_pipe": true, 00:04:47.337 "enable_quickack": false, 00:04:47.337 "enable_placement_id": 0, 00:04:47.337 "enable_zerocopy_send_server": true, 00:04:47.337 "enable_zerocopy_send_client": false, 00:04:47.337 "zerocopy_threshold": 0, 00:04:47.337 "tls_version": 0, 00:04:47.337 "enable_ktls": false 00:04:47.337 } 00:04:47.337 } 00:04:47.337 ] 00:04:47.337 }, 00:04:47.337 { 00:04:47.337 "subsystem": "vmd", 00:04:47.337 "config": [] 00:04:47.337 }, 00:04:47.337 { 00:04:47.337 "subsystem": "accel", 00:04:47.337 "config": [ 00:04:47.337 { 00:04:47.337 "method": "accel_set_options", 00:04:47.337 "params": { 00:04:47.337 "small_cache_size": 128, 00:04:47.337 "large_cache_size": 16, 00:04:47.337 "task_count": 2048, 00:04:47.337 "sequence_count": 2048, 00:04:47.337 "buf_count": 2048 00:04:47.337 } 00:04:47.337 } 00:04:47.337 ] 00:04:47.337 }, 00:04:47.337 { 00:04:47.337 "subsystem": "bdev", 00:04:47.337 "config": [ 00:04:47.337 { 00:04:47.337 "method": "bdev_set_options", 00:04:47.337 "params": { 00:04:47.337 "bdev_io_pool_size": 65535, 00:04:47.337 "bdev_io_cache_size": 256, 00:04:47.337 "bdev_auto_examine": true, 00:04:47.337 "iobuf_small_cache_size": 128, 00:04:47.337 "iobuf_large_cache_size": 16 00:04:47.337 } 00:04:47.337 }, 00:04:47.337 { 00:04:47.337 "method": "bdev_raid_set_options", 00:04:47.337 "params": { 00:04:47.337 "process_window_size_kb": 1024, 00:04:47.337 "process_max_bandwidth_mb_sec": 0 00:04:47.337 } 00:04:47.337 }, 00:04:47.337 { 00:04:47.337 "method": "bdev_iscsi_set_options", 
00:04:47.337 "params": {
00:04:47.337 "timeout_sec": 30
00:04:47.337 }
00:04:47.337 },
00:04:47.337 {
00:04:47.337 "method": "bdev_nvme_set_options",
00:04:47.337 "params": {
00:04:47.337 "action_on_timeout": "none",
00:04:47.337 "timeout_us": 0,
00:04:47.337 "timeout_admin_us": 0,
00:04:47.337 "keep_alive_timeout_ms": 10000,
00:04:47.337 "arbitration_burst": 0,
00:04:47.337 "low_priority_weight": 0,
00:04:47.337 "medium_priority_weight": 0,
00:04:47.337 "high_priority_weight": 0,
00:04:47.337 "nvme_adminq_poll_period_us": 10000,
00:04:47.337 "nvme_ioq_poll_period_us": 0,
00:04:47.337 "io_queue_requests": 0,
00:04:47.337 "delay_cmd_submit": true,
00:04:47.337 "transport_retry_count": 4,
00:04:47.337 "bdev_retry_count": 3,
00:04:47.338 "transport_ack_timeout": 0,
00:04:47.338 "ctrlr_loss_timeout_sec": 0,
00:04:47.338 "reconnect_delay_sec": 0,
00:04:47.338 "fast_io_fail_timeout_sec": 0,
00:04:47.338 "disable_auto_failback": false,
00:04:47.338 "generate_uuids": false,
00:04:47.338 "transport_tos": 0,
00:04:47.338 "nvme_error_stat": false,
00:04:47.338 "rdma_srq_size": 0,
00:04:47.338 "io_path_stat": false,
00:04:47.338 "allow_accel_sequence": false,
00:04:47.338 "rdma_max_cq_size": 0,
00:04:47.338 "rdma_cm_event_timeout_ms": 0,
00:04:47.338 "dhchap_digests": [
00:04:47.338 "sha256",
00:04:47.338 "sha384",
00:04:47.338 "sha512"
00:04:47.338 ],
00:04:47.338 "dhchap_dhgroups": [
00:04:47.338 "null",
00:04:47.338 "ffdhe2048",
00:04:47.338 "ffdhe3072",
00:04:47.338 "ffdhe4096",
00:04:47.338 "ffdhe6144",
00:04:47.338 "ffdhe8192"
00:04:47.338 ]
00:04:47.338 }
00:04:47.338 },
00:04:47.338 {
00:04:47.338 "method": "bdev_nvme_set_hotplug",
00:04:47.338 "params": {
00:04:47.338 "period_us": 100000,
00:04:47.338 "enable": false
00:04:47.338 }
00:04:47.338 },
00:04:47.338 {
00:04:47.338 "method": "bdev_wait_for_examine"
00:04:47.338 }
00:04:47.338 ]
00:04:47.338 },
00:04:47.338 {
00:04:47.338 "subsystem": "scsi",
00:04:47.338 "config": null
00:04:47.338 },
00:04:47.338 {
00:04:47.338 "subsystem": "scheduler",
00:04:47.338 "config": [
00:04:47.338 {
00:04:47.338 "method": "framework_set_scheduler",
00:04:47.338 "params": {
00:04:47.338 "name": "static"
00:04:47.338 }
00:04:47.338 }
00:04:47.338 ]
00:04:47.338 },
00:04:47.338 {
00:04:47.338 "subsystem": "vhost_scsi",
00:04:47.338 "config": []
00:04:47.338 },
00:04:47.338 {
00:04:47.338 "subsystem": "vhost_blk",
00:04:47.338 "config": []
00:04:47.338 },
00:04:47.338 {
00:04:47.338 "subsystem": "ublk",
00:04:47.338 "config": []
00:04:47.338 },
00:04:47.338 {
00:04:47.338 "subsystem": "nbd",
00:04:47.338 "config": []
00:04:47.338 },
00:04:47.338 {
00:04:47.338 "subsystem": "nvmf",
00:04:47.338 "config": [
00:04:47.338 {
00:04:47.338 "method": "nvmf_set_config",
00:04:47.338 "params": {
00:04:47.338 "discovery_filter": "match_any",
00:04:47.338 "admin_cmd_passthru": {
00:04:47.338 "identify_ctrlr": false
00:04:47.338 },
00:04:47.338 "dhchap_digests": [
00:04:47.338 "sha256",
00:04:47.338 "sha384",
00:04:47.338 "sha512"
00:04:47.338 ],
00:04:47.338 "dhchap_dhgroups": [
00:04:47.338 "null",
00:04:47.338 "ffdhe2048",
00:04:47.338 "ffdhe3072",
00:04:47.338 "ffdhe4096",
00:04:47.338 "ffdhe6144",
00:04:47.338 "ffdhe8192"
00:04:47.338 ]
00:04:47.338 }
00:04:47.338 },
00:04:47.338 {
00:04:47.338 "method": "nvmf_set_max_subsystems",
00:04:47.338 "params": {
00:04:47.338 "max_subsystems": 1024
00:04:47.338 }
00:04:47.338 },
00:04:47.338 {
00:04:47.338 "method": "nvmf_set_crdt",
00:04:47.338 "params": {
00:04:47.338 "crdt1": 0,
00:04:47.338 "crdt2": 0,
00:04:47.338 "crdt3": 0
00:04:47.338 }
00:04:47.338 },
00:04:47.338 {
00:04:47.338 "method": "nvmf_create_transport",
00:04:47.338 "params": {
00:04:47.338 "trtype": "TCP",
00:04:47.338 "max_queue_depth": 128,
00:04:47.338 "max_io_qpairs_per_ctrlr": 127,
00:04:47.338 "in_capsule_data_size": 4096,
00:04:47.338 "max_io_size": 131072,
00:04:47.338 "io_unit_size": 131072,
00:04:47.338 "max_aq_depth": 128,
00:04:47.338 "num_shared_buffers": 511,
00:04:47.338 "buf_cache_size": 4294967295,
00:04:47.338 "dif_insert_or_strip": false,
00:04:47.338 "zcopy": false,
00:04:47.338 "c2h_success": true,
00:04:47.338 "sock_priority": 0,
00:04:47.338 "abort_timeout_sec": 1,
00:04:47.338 "ack_timeout": 0,
00:04:47.338 "data_wr_pool_size": 0
00:04:47.338 }
00:04:47.338 }
00:04:47.338 ]
00:04:47.338 },
00:04:47.338 {
00:04:47.338 "subsystem": "iscsi",
00:04:47.338 "config": [
00:04:47.338 {
00:04:47.338 "method": "iscsi_set_options",
00:04:47.338 "params": {
00:04:47.338 "node_base": "iqn.2016-06.io.spdk",
00:04:47.338 "max_sessions": 128,
00:04:47.338 "max_connections_per_session": 2,
00:04:47.338 "max_queue_depth": 64,
00:04:47.338 "default_time2wait": 2,
00:04:47.338 "default_time2retain": 20,
00:04:47.338 "first_burst_length": 8192,
00:04:47.338 "immediate_data": true,
00:04:47.338 "allow_duplicated_isid": false,
00:04:47.338 "error_recovery_level": 0,
00:04:47.338 "nop_timeout": 60,
00:04:47.338 "nop_in_interval": 30,
00:04:47.338 "disable_chap": false,
00:04:47.338 "require_chap": false,
00:04:47.338 "mutual_chap": false,
00:04:47.338 "chap_group": 0,
00:04:47.338 "max_large_datain_per_connection": 64,
00:04:47.338 "max_r2t_per_connection": 4,
00:04:47.338 "pdu_pool_size": 36864,
00:04:47.338 "immediate_data_pool_size": 16384,
00:04:47.338 "data_out_pool_size": 2048
00:04:47.338 }
00:04:47.338 }
00:04:47.338 ]
00:04:47.338 }
00:04:47.338 ]
00:04:47.338 }
00:04:47.338 11:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:47.338 11:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57285
00:04:47.338 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57285 ']'
00:04:47.338 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57285
00:04:47.338 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:47.338 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:47.338 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57285
killing process with pid 57285
00:04:47.338 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:47.338 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:47.338 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57285'
00:04:47.338 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57285
00:04:47.338 11:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57285
00:04:49.872 11:56:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57341
00:04:49.872 11:56:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:49.872 11:56:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:55.164 11:56:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57341
00:04:55.164 11:56:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57341 ']'
00:04:55.164 11:56:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57341
00:04:55.164 11:56:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:55.164 11:56:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:55.164 11:56:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57341
killing process with pid 57341
00:04:55.164 11:56:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:55.164 11:56:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:55.164 11:56:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57341'
00:04:55.164 11:56:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57341
00:04:55.164 11:56:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57341
00:04:57.072 11:57:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:57.072 11:57:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:57.072 ************************************
00:04:57.072 END TEST skip_rpc_with_json
00:04:57.072 ************************************
00:04:57.072 
00:04:57.072 real 0m11.167s
00:04:57.072 user 0m10.649s
00:04:57.072 sys 0m0.816s
00:04:57.072 11:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:57.072 11:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:57.072 11:57:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:04:57.072 11:57:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:57.072 11:57:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:57.072 11:57:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:57.072 ************************************
00:04:57.072 START TEST skip_rpc_with_delay
00:04:57.072 ************************************
00:04:57.072 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:04:57.072 11:57:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:57.072 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:04:57.072 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:57.072 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:57.073 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.073 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:57.073 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.073 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:57.073 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.073 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:57.073 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:04:57.073 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:57.073 [2024-11-19 11:57:00.508732] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:04:57.333 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:04:57.333 ************************************
00:04:57.333 END TEST skip_rpc_with_delay
00:04:57.333 ************************************
00:04:57.333 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:57.333 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:57.333 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:57.333 
00:04:57.333 real 0m0.168s
00:04:57.333 user 0m0.092s
00:04:57.333 sys 0m0.075s
00:04:57.333 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:57.333 11:57:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:04:57.333 11:57:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:04:57.333 11:57:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:04:57.333 11:57:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:04:57.333 11:57:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:57.333 11:57:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:57.333 11:57:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:57.333 ************************************
00:04:57.333 START TEST exit_on_failed_rpc_init
00:04:57.333 ************************************
00:04:57.333 11:57:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:04:57.333 11:57:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57469
00:04:57.333 11:57:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:57.333 11:57:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57469
00:04:57.333 11:57:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57469 ']'
00:04:57.333 11:57:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:57.333 11:57:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:57.333 11:57:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:57.333 11:57:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:57.333 11:57:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:57.593 [2024-11-19 11:57:00.741266] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:04:57.593 [2024-11-19 11:57:00.741479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57469 ]
00:04:57.593 [2024-11-19 11:57:00.916960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.853 [2024-11-19 11:57:01.029805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:04:58.793 11:57:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:58.793 [2024-11-19 11:57:01.966994] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:04:58.793 [2024-11-19 11:57:01.967172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57487 ]
00:04:58.793 [2024-11-19 11:57:02.141680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:59.053 [2024-11-19 11:57:02.270491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:59.053 [2024-11-19 11:57:02.270695] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:04:59.053 [2024-11-19 11:57:02.270760] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:04:59.053 [2024-11-19 11:57:02.270836] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57469
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57469 ']'
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57469
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57469
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57469'
killing process with pid 57469
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57469
00:04:59.320 11:57:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57469
00:05:01.874 
00:05:01.874 real 0m4.566s
00:05:01.874 user 0m4.912s
00:05:01.874 sys 0m0.555s
00:05:01.874 11:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:01.874 11:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:01.874 ************************************
00:05:01.874 END TEST exit_on_failed_rpc_init
00:05:01.874 ************************************
00:05:02.134 11:57:05 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:05:02.134 ************************************
00:05:02.134 END TEST skip_rpc
00:05:02.134 ************************************
00:05:02.134 
00:05:02.134 real 0m23.796s
00:05:02.134 user 0m22.835s
00:05:02.134 sys 0m2.090s
00:05:02.134 11:57:05 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:02.134 11:57:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:02.134 11:57:05 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:05:02.134 11:57:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:02.134 11:57:05 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:02.134 11:57:05 -- common/autotest_common.sh@10 -- # set +x
00:05:02.134 ************************************
00:05:02.134 START TEST rpc_client
00:05:02.134 ************************************
00:05:02.134 11:57:05 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:05:02.134 * Looking for test storage...
00:05:02.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:05:02.134 11:57:05 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:02.134 11:57:05 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:05:02.134 11:57:05 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:02.394 11:57:05 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@345 -- # : 1
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@353 -- # local d=1
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@355 -- # echo 1
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@353 -- # local d=2
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@355 -- # echo 2
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:02.394 11:57:05 rpc_client -- scripts/common.sh@368 -- # return 0
00:05:02.394 11:57:05 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:02.394 11:57:05 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:02.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:02.394 --rc genhtml_branch_coverage=1
00:05:02.394 --rc genhtml_function_coverage=1
00:05:02.394 --rc genhtml_legend=1
00:05:02.394 --rc geninfo_all_blocks=1
00:05:02.394 --rc geninfo_unexecuted_blocks=1
00:05:02.394 
00:05:02.394 '
00:05:02.394 11:57:05 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:02.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:02.394 --rc genhtml_branch_coverage=1
00:05:02.394 --rc genhtml_function_coverage=1
00:05:02.394 --rc genhtml_legend=1
00:05:02.394 --rc geninfo_all_blocks=1
00:05:02.394 --rc geninfo_unexecuted_blocks=1
00:05:02.394 
00:05:02.394 '
00:05:02.394 11:57:05 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:02.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:02.394 --rc genhtml_branch_coverage=1
00:05:02.394 --rc genhtml_function_coverage=1
00:05:02.394 --rc genhtml_legend=1
00:05:02.394 --rc geninfo_all_blocks=1
00:05:02.394 --rc geninfo_unexecuted_blocks=1
00:05:02.394 
00:05:02.394 '
00:05:02.394 11:57:05 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:02.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:02.394 --rc genhtml_branch_coverage=1
00:05:02.394 --rc genhtml_function_coverage=1
00:05:02.394 --rc genhtml_legend=1
00:05:02.394 --rc geninfo_all_blocks=1
00:05:02.394 --rc geninfo_unexecuted_blocks=1
00:05:02.394 
00:05:02.394 '
00:05:02.394 11:57:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:05:02.394 OK
00:05:02.394 11:57:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:02.394 
00:05:02.394 real 0m0.284s
00:05:02.394 user 0m0.133s
00:05:02.394 sys 0m0.166s
00:05:02.394 11:57:05 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:02.394 11:57:05 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:02.394 ************************************
00:05:02.394 END TEST rpc_client
00:05:02.394 ************************************
00:05:02.394 11:57:05 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:05:02.394 11:57:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:02.394 11:57:05 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:02.394 11:57:05 -- common/autotest_common.sh@10 -- # set +x
00:05:02.394 ************************************
00:05:02.394 START TEST json_config
00:05:02.394 ************************************ 00:05:02.395 11:57:05 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:02.654 11:57:05 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.654 11:57:05 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.654 11:57:05 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.654 11:57:05 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.654 11:57:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.654 11:57:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.654 11:57:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.654 11:57:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.654 11:57:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.654 11:57:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.654 11:57:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.654 11:57:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.654 11:57:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.654 11:57:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.654 11:57:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.654 11:57:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:02.654 11:57:05 json_config -- scripts/common.sh@345 -- # : 1 00:05:02.654 11:57:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.655 11:57:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.655 11:57:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:02.655 11:57:05 json_config -- scripts/common.sh@353 -- # local d=1 00:05:02.655 11:57:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.655 11:57:05 json_config -- scripts/common.sh@355 -- # echo 1 00:05:02.655 11:57:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.655 11:57:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:02.655 11:57:05 json_config -- scripts/common.sh@353 -- # local d=2 00:05:02.655 11:57:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.655 11:57:05 json_config -- scripts/common.sh@355 -- # echo 2 00:05:02.655 11:57:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.655 11:57:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.655 11:57:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.655 11:57:05 json_config -- scripts/common.sh@368 -- # return 0 00:05:02.655 11:57:05 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.655 11:57:05 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:02.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.655 --rc genhtml_branch_coverage=1 00:05:02.655 --rc genhtml_function_coverage=1 00:05:02.655 --rc genhtml_legend=1 00:05:02.655 --rc geninfo_all_blocks=1 00:05:02.655 --rc geninfo_unexecuted_blocks=1 00:05:02.655 00:05:02.655 ' 00:05:02.655 11:57:05 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.655 --rc genhtml_branch_coverage=1 00:05:02.655 --rc genhtml_function_coverage=1 00:05:02.655 --rc genhtml_legend=1 00:05:02.655 --rc geninfo_all_blocks=1 00:05:02.655 --rc geninfo_unexecuted_blocks=1 00:05:02.655 00:05:02.655 ' 00:05:02.655 11:57:05 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:02.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.655 --rc genhtml_branch_coverage=1 00:05:02.655 --rc genhtml_function_coverage=1 00:05:02.655 --rc genhtml_legend=1 00:05:02.655 --rc geninfo_all_blocks=1 00:05:02.655 --rc geninfo_unexecuted_blocks=1 00:05:02.655 00:05:02.655 ' 00:05:02.655 11:57:05 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.655 --rc genhtml_branch_coverage=1 00:05:02.655 --rc genhtml_function_coverage=1 00:05:02.655 --rc genhtml_legend=1 00:05:02.655 --rc geninfo_all_blocks=1 00:05:02.655 --rc geninfo_unexecuted_blocks=1 00:05:02.655 00:05:02.655 ' 00:05:02.655 11:57:05 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fc16503-8259-4497-8462-7e6e8faaef14 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=2fc16503-8259-4497-8462-7e6e8faaef14 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.655 11:57:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.655 11:57:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.655 11:57:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.655 11:57:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.655 11:57:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.655 11:57:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.655 11:57:05 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.655 11:57:05 json_config -- paths/export.sh@5 -- # export PATH 00:05:02.655 11:57:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@51 -- # : 0 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.655 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.655 11:57:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.655 11:57:05 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
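The xtrace above repeatedly walks the version helpers in SPDK's `scripts/common.sh` (`lt 1.15 2` calling `cmp_versions 1.15 '<' 2`) to decide whether the installed `lcov` needs the `--rc lcov_branch_coverage=1` options. A minimal sketch of that comparison style, reconstructed from the traced commands (function names follow the trace; the digit validation and option plumbing around it are simplified):

```shell
#!/usr/bin/env bash
# Compare two dotted version strings element by element, as traced above:
# split each on '.', '-' and ':' into an array, then walk the longer one.
cmp_versions() {
    local ver1 ver1_l ver2 ver2_l
    IFS=.-: read -ra ver1 <<< "$1"
    local op=$2
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}

    local lt=0 gt=0 v
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # A missing component compares as 0 (e.g. 1.15 vs 2 -> 1.15 vs 2.0).
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && gt=1 && break
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && lt=1 && break
    done

    case "$op" in
        '<')  (( lt == 1 )) ;;
        '>')  (( gt == 1 )) ;;
        '<=') (( gt == 0 )) ;;
        '>=') (( lt == 0 )) ;;
    esac
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov predates 2.x"
```

The return status is the comparison result, which is why the trace shows a bare `return 0` right before the `LCOV_OPTS` exports are taken.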
00:05:02.655 11:57:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:02.655 11:57:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:02.655 11:57:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:02.655 11:57:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:02.655 11:57:05 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:02.655 WARNING: No tests are enabled so not running JSON configuration tests 00:05:02.655 11:57:05 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:02.655 00:05:02.655 real 0m0.227s 00:05:02.655 user 0m0.135s 00:05:02.655 sys 0m0.095s 00:05:02.655 11:57:05 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.655 11:57:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.655 ************************************ 00:05:02.655 END TEST json_config 00:05:02.655 ************************************ 00:05:02.655 11:57:05 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:02.655 11:57:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.655 11:57:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.655 11:57:05 -- common/autotest_common.sh@10 -- # set +x 00:05:02.655 ************************************ 00:05:02.655 START TEST json_config_extra_key 00:05:02.655 ************************************ 00:05:02.655 11:57:05 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:02.916 11:57:06 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.916 11:57:06 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:05:02.916 11:57:06 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.916 11:57:06 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:02.916 11:57:06 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.916 11:57:06 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:02.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.916 --rc genhtml_branch_coverage=1 00:05:02.916 --rc genhtml_function_coverage=1 00:05:02.916 --rc genhtml_legend=1 00:05:02.916 --rc geninfo_all_blocks=1 00:05:02.916 --rc geninfo_unexecuted_blocks=1 00:05:02.916 00:05:02.916 ' 00:05:02.916 11:57:06 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.916 --rc genhtml_branch_coverage=1 00:05:02.916 --rc genhtml_function_coverage=1 00:05:02.916 --rc 
genhtml_legend=1 00:05:02.916 --rc geninfo_all_blocks=1 00:05:02.916 --rc geninfo_unexecuted_blocks=1 00:05:02.916 00:05:02.916 ' 00:05:02.916 11:57:06 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:02.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.916 --rc genhtml_branch_coverage=1 00:05:02.916 --rc genhtml_function_coverage=1 00:05:02.916 --rc genhtml_legend=1 00:05:02.916 --rc geninfo_all_blocks=1 00:05:02.916 --rc geninfo_unexecuted_blocks=1 00:05:02.916 00:05:02.916 ' 00:05:02.916 11:57:06 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.916 --rc genhtml_branch_coverage=1 00:05:02.916 --rc genhtml_function_coverage=1 00:05:02.916 --rc genhtml_legend=1 00:05:02.916 --rc geninfo_all_blocks=1 00:05:02.916 --rc geninfo_unexecuted_blocks=1 00:05:02.916 00:05:02.916 ' 00:05:02.916 11:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fc16503-8259-4497-8462-7e6e8faaef14 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2fc16503-8259-4497-8462-7e6e8faaef14 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.916 11:57:06 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.916 11:57:06 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.916 11:57:06 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.916 11:57:06 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.916 11:57:06 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:02.916 11:57:06 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.916 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.916 11:57:06 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.916 11:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:02.916 11:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:02.916 11:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:02.916 11:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:02.916 11:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:02.916 11:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:02.916 11:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:02.916 11:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:02.917 11:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:02.917 11:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:02.917 11:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:02.917 INFO: launching applications... 
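The `json_config/common.sh` harness traced above keeps its per-app state in bash associative arrays (`app_pid`, `app_socket`, `app_params`, `configs_path`), all keyed by an app name such as `target`. A reduced sketch of that bookkeeping pattern (the array names and values follow the trace; the launcher body is a hypothetical stand-in for `spdk_tgt`):

```shell
#!/usr/bin/env bash
# Per-app bookkeeping with associative arrays, keyed by app name,
# mirroring the app_pid/app_socket/app_params declarations in the trace.
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')

start_app() {
    local app=$1
    # Hypothetical launcher: the real harness execs spdk_tgt with
    # ${app_params[$app]} -r ${app_socket[$app]} and records the PID.
    sleep 60 &
    app_pid["$app"]=$!
}

start_app target
echo "target pid=${app_pid[target]} socket=${app_socket[target]}"
kill "${app_pid[target]}"
```

Keying everything by app name is what lets the same `common.sh` functions drive both a lone `target` app and multi-app setups without duplicating variables.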
00:05:02.917 11:57:06 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:02.917 11:57:06 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:02.917 11:57:06 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:02.917 11:57:06 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:02.917 11:57:06 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:02.917 11:57:06 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:02.917 11:57:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.917 11:57:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.917 11:57:06 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57703 00:05:02.917 11:57:06 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:02.917 Waiting for target to run... 00:05:02.917 11:57:06 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:02.917 11:57:06 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57703 /var/tmp/spdk_tgt.sock 00:05:02.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:02.917 11:57:06 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57703 ']' 00:05:02.917 11:57:06 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:02.917 11:57:06 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.917 11:57:06 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:02.917 11:57:06 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.917 11:57:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:03.176 [2024-11-19 11:57:06.303680] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:03.176 [2024-11-19 11:57:06.303805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57703 ] 00:05:03.436 [2024-11-19 11:57:06.696655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.695 [2024-11-19 11:57:06.820701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.264 00:05:04.264 INFO: shutting down applications... 00:05:04.264 11:57:07 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.264 11:57:07 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:04.264 11:57:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:04.264 11:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
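The shutdown sequence traced around this point sends the target a SIGINT and then probes it with `kill -0` every half second, giving up after 30 attempts. A hedged, standalone sketch of that wait-for-exit loop (the 30-try / 0.5 s budget matches the trace; the background `sleep` is a stand-in for the spdk_tgt process):

```shell
#!/usr/bin/env bash
# Ask a process to exit, then poll until it is gone or a retry budget
# (30 tries x 0.5 s = 15 s) is exhausted, as in json_config/common.sh.
wait_for_exit() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        # kill -0 only checks that the PID exists; it delivers no signal.
        kill -0 "$pid" 2>/dev/null || return 0
        sleep 0.5
    done
    return 1   # still alive after the budget; the caller escalates
}

sleep 300 &     # stand-in for the spdk_tgt target process
pid=$!
if wait_for_exit "$pid"; then
    echo 'SPDK target shutdown done'
fi
```

This is why the log interleaves several `kill -0 57703` / `sleep 0.5` pairs before finally printing "SPDK target shutdown done": each pair is one iteration of the poll loop.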
00:05:04.264 11:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:04.264 11:57:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:04.264 11:57:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:04.264 11:57:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57703 ]] 00:05:04.264 11:57:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57703 00:05:04.264 11:57:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:04.264 11:57:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.264 11:57:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57703 00:05:04.264 11:57:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.845 11:57:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.845 11:57:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.845 11:57:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57703 00:05:04.845 11:57:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.441 11:57:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.441 11:57:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.441 11:57:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57703 00:05:05.441 11:57:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.701 11:57:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.701 11:57:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.701 11:57:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57703 00:05:05.701 11:57:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.283 11:57:09 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:06.283 11:57:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.283 11:57:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57703 00:05:06.283 11:57:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.852 11:57:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.852 11:57:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.852 11:57:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57703 00:05:06.852 11:57:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.421 11:57:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.421 11:57:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.421 11:57:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57703 00:05:07.421 11:57:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.421 11:57:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:07.421 11:57:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.421 SPDK target shutdown done 00:05:07.421 Success 00:05:07.421 11:57:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.421 11:57:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:07.421 ************************************ 00:05:07.421 END TEST json_config_extra_key 00:05:07.421 ************************************ 00:05:07.421 00:05:07.421 real 0m4.618s 00:05:07.421 user 0m4.274s 00:05:07.421 sys 0m0.632s 00:05:07.421 11:57:10 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.421 11:57:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:07.421 11:57:10 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.421 11:57:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.421 11:57:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.421 11:57:10 -- common/autotest_common.sh@10 -- # set +x 00:05:07.421 ************************************ 00:05:07.421 START TEST alias_rpc 00:05:07.421 ************************************ 00:05:07.421 11:57:10 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.421 * Looking for test storage... 00:05:07.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:07.682 11:57:10 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.682 11:57:10 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:07.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.682 --rc genhtml_branch_coverage=1 00:05:07.682 --rc genhtml_function_coverage=1 00:05:07.682 --rc genhtml_legend=1 00:05:07.682 --rc geninfo_all_blocks=1 00:05:07.682 --rc geninfo_unexecuted_blocks=1 00:05:07.682 00:05:07.682 ' 00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:07.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.682 --rc genhtml_branch_coverage=1 00:05:07.682 --rc genhtml_function_coverage=1 00:05:07.682 --rc 
genhtml_legend=1 00:05:07.682 --rc geninfo_all_blocks=1 00:05:07.682 --rc geninfo_unexecuted_blocks=1 00:05:07.682 00:05:07.682 ' 00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:07.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.682 --rc genhtml_branch_coverage=1 00:05:07.682 --rc genhtml_function_coverage=1 00:05:07.682 --rc genhtml_legend=1 00:05:07.682 --rc geninfo_all_blocks=1 00:05:07.682 --rc geninfo_unexecuted_blocks=1 00:05:07.682 00:05:07.682 ' 00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:07.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.682 --rc genhtml_branch_coverage=1 00:05:07.682 --rc genhtml_function_coverage=1 00:05:07.682 --rc genhtml_legend=1 00:05:07.682 --rc geninfo_all_blocks=1 00:05:07.682 --rc geninfo_unexecuted_blocks=1 00:05:07.682 00:05:07.682 ' 00:05:07.682 11:57:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:07.682 11:57:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57814 00:05:07.682 11:57:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.682 11:57:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57814 00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57814 ']' 00:05:07.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.682 11:57:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.682 [2024-11-19 11:57:11.002881] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:07.682 [2024-11-19 11:57:11.003131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57814 ] 00:05:07.942 [2024-11-19 11:57:11.178201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.202 [2024-11-19 11:57:11.320278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.140 11:57:12 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.140 11:57:12 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:09.140 11:57:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:09.400 11:57:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57814 00:05:09.400 11:57:12 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57814 ']' 00:05:09.400 11:57:12 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57814 00:05:09.400 11:57:12 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:09.400 11:57:12 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.400 11:57:12 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57814 00:05:09.400 11:57:12 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.400 killing process with pid 57814 00:05:09.400 11:57:12 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.400 11:57:12 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57814' 00:05:09.400 11:57:12 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57814 00:05:09.400 11:57:12 alias_rpc -- common/autotest_common.sh@978 -- # wait 57814 00:05:11.942 ************************************ 00:05:11.942 END TEST alias_rpc 00:05:11.942 ************************************ 00:05:11.942 00:05:11.942 real 0m4.532s 00:05:11.942 user 0m4.327s 00:05:11.942 sys 0m0.734s 00:05:11.942 11:57:15 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.942 11:57:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.942 11:57:15 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:11.942 11:57:15 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:11.942 11:57:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.942 11:57:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.942 11:57:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.942 ************************************ 00:05:11.942 START TEST spdkcli_tcp 00:05:11.942 ************************************ 00:05:11.942 11:57:15 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:12.201 * Looking for test storage... 
00:05:12.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:12.201 11:57:15 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.201 11:57:15 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.201 11:57:15 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.201 11:57:15 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.201 11:57:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:12.202 11:57:15 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.202 11:57:15 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:12.202 11:57:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:12.202 11:57:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.202 11:57:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:12.202 11:57:15 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.202 11:57:15 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.202 11:57:15 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.202 11:57:15 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:12.202 11:57:15 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.202 11:57:15 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.202 --rc genhtml_branch_coverage=1 00:05:12.202 --rc genhtml_function_coverage=1 00:05:12.202 --rc genhtml_legend=1 00:05:12.202 --rc geninfo_all_blocks=1 00:05:12.202 --rc geninfo_unexecuted_blocks=1 00:05:12.202 00:05:12.202 ' 00:05:12.202 11:57:15 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.202 --rc genhtml_branch_coverage=1 00:05:12.202 --rc genhtml_function_coverage=1 00:05:12.202 --rc genhtml_legend=1 00:05:12.202 --rc geninfo_all_blocks=1 00:05:12.202 --rc geninfo_unexecuted_blocks=1 00:05:12.202 00:05:12.202 ' 00:05:12.202 11:57:15 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.202 --rc genhtml_branch_coverage=1 00:05:12.202 --rc genhtml_function_coverage=1 00:05:12.202 --rc genhtml_legend=1 00:05:12.202 --rc geninfo_all_blocks=1 00:05:12.202 --rc geninfo_unexecuted_blocks=1 00:05:12.202 00:05:12.202 ' 00:05:12.202 11:57:15 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.202 --rc genhtml_branch_coverage=1 00:05:12.202 --rc genhtml_function_coverage=1 00:05:12.202 --rc genhtml_legend=1 00:05:12.202 --rc geninfo_all_blocks=1 00:05:12.202 --rc geninfo_unexecuted_blocks=1 00:05:12.202 00:05:12.202 ' 00:05:12.202 11:57:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:12.202 11:57:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:12.202 11:57:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:12.202 11:57:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:12.202 11:57:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:12.202 11:57:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:12.202 11:57:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:12.202 11:57:15 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.202 11:57:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.202 11:57:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57927 00:05:12.202 11:57:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:12.202 11:57:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57927 00:05:12.202 11:57:15 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57927 ']' 00:05:12.202 11:57:15 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.202 11:57:15 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.202 11:57:15 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.202 11:57:15 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.202 11:57:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.462 [2024-11-19 11:57:15.604732] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:12.462 [2024-11-19 11:57:15.604941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57927 ] 00:05:12.462 [2024-11-19 11:57:15.760693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.723 [2024-11-19 11:57:15.902543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.723 [2024-11-19 11:57:15.902585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.662 11:57:16 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.662 11:57:16 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:13.662 11:57:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:13.662 11:57:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57948 00:05:13.662 11:57:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:13.921 [ 00:05:13.921 "bdev_malloc_delete", 
00:05:13.921 "bdev_malloc_create", 00:05:13.921 "bdev_null_resize", 00:05:13.921 "bdev_null_delete", 00:05:13.921 "bdev_null_create", 00:05:13.921 "bdev_nvme_cuse_unregister", 00:05:13.921 "bdev_nvme_cuse_register", 00:05:13.921 "bdev_opal_new_user", 00:05:13.921 "bdev_opal_set_lock_state", 00:05:13.921 "bdev_opal_delete", 00:05:13.921 "bdev_opal_get_info", 00:05:13.921 "bdev_opal_create", 00:05:13.921 "bdev_nvme_opal_revert", 00:05:13.921 "bdev_nvme_opal_init", 00:05:13.921 "bdev_nvme_send_cmd", 00:05:13.921 "bdev_nvme_set_keys", 00:05:13.921 "bdev_nvme_get_path_iostat", 00:05:13.921 "bdev_nvme_get_mdns_discovery_info", 00:05:13.921 "bdev_nvme_stop_mdns_discovery", 00:05:13.921 "bdev_nvme_start_mdns_discovery", 00:05:13.921 "bdev_nvme_set_multipath_policy", 00:05:13.921 "bdev_nvme_set_preferred_path", 00:05:13.921 "bdev_nvme_get_io_paths", 00:05:13.921 "bdev_nvme_remove_error_injection", 00:05:13.921 "bdev_nvme_add_error_injection", 00:05:13.921 "bdev_nvme_get_discovery_info", 00:05:13.921 "bdev_nvme_stop_discovery", 00:05:13.921 "bdev_nvme_start_discovery", 00:05:13.921 "bdev_nvme_get_controller_health_info", 00:05:13.921 "bdev_nvme_disable_controller", 00:05:13.921 "bdev_nvme_enable_controller", 00:05:13.921 "bdev_nvme_reset_controller", 00:05:13.921 "bdev_nvme_get_transport_statistics", 00:05:13.921 "bdev_nvme_apply_firmware", 00:05:13.921 "bdev_nvme_detach_controller", 00:05:13.921 "bdev_nvme_get_controllers", 00:05:13.921 "bdev_nvme_attach_controller", 00:05:13.921 "bdev_nvme_set_hotplug", 00:05:13.921 "bdev_nvme_set_options", 00:05:13.921 "bdev_passthru_delete", 00:05:13.921 "bdev_passthru_create", 00:05:13.921 "bdev_lvol_set_parent_bdev", 00:05:13.922 "bdev_lvol_set_parent", 00:05:13.922 "bdev_lvol_check_shallow_copy", 00:05:13.922 "bdev_lvol_start_shallow_copy", 00:05:13.922 "bdev_lvol_grow_lvstore", 00:05:13.922 "bdev_lvol_get_lvols", 00:05:13.922 "bdev_lvol_get_lvstores", 00:05:13.922 "bdev_lvol_delete", 00:05:13.922 "bdev_lvol_set_read_only", 
00:05:13.922 "bdev_lvol_resize", 00:05:13.922 "bdev_lvol_decouple_parent", 00:05:13.922 "bdev_lvol_inflate", 00:05:13.922 "bdev_lvol_rename", 00:05:13.922 "bdev_lvol_clone_bdev", 00:05:13.922 "bdev_lvol_clone", 00:05:13.922 "bdev_lvol_snapshot", 00:05:13.922 "bdev_lvol_create", 00:05:13.922 "bdev_lvol_delete_lvstore", 00:05:13.922 "bdev_lvol_rename_lvstore", 00:05:13.922 "bdev_lvol_create_lvstore", 00:05:13.922 "bdev_raid_set_options", 00:05:13.922 "bdev_raid_remove_base_bdev", 00:05:13.922 "bdev_raid_add_base_bdev", 00:05:13.922 "bdev_raid_delete", 00:05:13.922 "bdev_raid_create", 00:05:13.922 "bdev_raid_get_bdevs", 00:05:13.922 "bdev_error_inject_error", 00:05:13.922 "bdev_error_delete", 00:05:13.922 "bdev_error_create", 00:05:13.922 "bdev_split_delete", 00:05:13.922 "bdev_split_create", 00:05:13.922 "bdev_delay_delete", 00:05:13.922 "bdev_delay_create", 00:05:13.922 "bdev_delay_update_latency", 00:05:13.922 "bdev_zone_block_delete", 00:05:13.922 "bdev_zone_block_create", 00:05:13.922 "blobfs_create", 00:05:13.922 "blobfs_detect", 00:05:13.922 "blobfs_set_cache_size", 00:05:13.922 "bdev_aio_delete", 00:05:13.922 "bdev_aio_rescan", 00:05:13.922 "bdev_aio_create", 00:05:13.922 "bdev_ftl_set_property", 00:05:13.922 "bdev_ftl_get_properties", 00:05:13.922 "bdev_ftl_get_stats", 00:05:13.922 "bdev_ftl_unmap", 00:05:13.922 "bdev_ftl_unload", 00:05:13.922 "bdev_ftl_delete", 00:05:13.922 "bdev_ftl_load", 00:05:13.922 "bdev_ftl_create", 00:05:13.922 "bdev_virtio_attach_controller", 00:05:13.922 "bdev_virtio_scsi_get_devices", 00:05:13.922 "bdev_virtio_detach_controller", 00:05:13.922 "bdev_virtio_blk_set_hotplug", 00:05:13.922 "bdev_iscsi_delete", 00:05:13.922 "bdev_iscsi_create", 00:05:13.922 "bdev_iscsi_set_options", 00:05:13.922 "accel_error_inject_error", 00:05:13.922 "ioat_scan_accel_module", 00:05:13.922 "dsa_scan_accel_module", 00:05:13.922 "iaa_scan_accel_module", 00:05:13.922 "keyring_file_remove_key", 00:05:13.922 "keyring_file_add_key", 00:05:13.922 
"keyring_linux_set_options", 00:05:13.922 "fsdev_aio_delete", 00:05:13.922 "fsdev_aio_create", 00:05:13.922 "iscsi_get_histogram", 00:05:13.922 "iscsi_enable_histogram", 00:05:13.922 "iscsi_set_options", 00:05:13.922 "iscsi_get_auth_groups", 00:05:13.922 "iscsi_auth_group_remove_secret", 00:05:13.922 "iscsi_auth_group_add_secret", 00:05:13.922 "iscsi_delete_auth_group", 00:05:13.922 "iscsi_create_auth_group", 00:05:13.922 "iscsi_set_discovery_auth", 00:05:13.922 "iscsi_get_options", 00:05:13.922 "iscsi_target_node_request_logout", 00:05:13.922 "iscsi_target_node_set_redirect", 00:05:13.922 "iscsi_target_node_set_auth", 00:05:13.922 "iscsi_target_node_add_lun", 00:05:13.922 "iscsi_get_stats", 00:05:13.922 "iscsi_get_connections", 00:05:13.922 "iscsi_portal_group_set_auth", 00:05:13.922 "iscsi_start_portal_group", 00:05:13.922 "iscsi_delete_portal_group", 00:05:13.922 "iscsi_create_portal_group", 00:05:13.922 "iscsi_get_portal_groups", 00:05:13.922 "iscsi_delete_target_node", 00:05:13.922 "iscsi_target_node_remove_pg_ig_maps", 00:05:13.922 "iscsi_target_node_add_pg_ig_maps", 00:05:13.922 "iscsi_create_target_node", 00:05:13.922 "iscsi_get_target_nodes", 00:05:13.922 "iscsi_delete_initiator_group", 00:05:13.922 "iscsi_initiator_group_remove_initiators", 00:05:13.922 "iscsi_initiator_group_add_initiators", 00:05:13.922 "iscsi_create_initiator_group", 00:05:13.922 "iscsi_get_initiator_groups", 00:05:13.922 "nvmf_set_crdt", 00:05:13.922 "nvmf_set_config", 00:05:13.922 "nvmf_set_max_subsystems", 00:05:13.922 "nvmf_stop_mdns_prr", 00:05:13.922 "nvmf_publish_mdns_prr", 00:05:13.922 "nvmf_subsystem_get_listeners", 00:05:13.922 "nvmf_subsystem_get_qpairs", 00:05:13.922 "nvmf_subsystem_get_controllers", 00:05:13.922 "nvmf_get_stats", 00:05:13.922 "nvmf_get_transports", 00:05:13.922 "nvmf_create_transport", 00:05:13.922 "nvmf_get_targets", 00:05:13.922 "nvmf_delete_target", 00:05:13.922 "nvmf_create_target", 00:05:13.922 "nvmf_subsystem_allow_any_host", 00:05:13.922 
"nvmf_subsystem_set_keys", 00:05:13.922 "nvmf_subsystem_remove_host", 00:05:13.922 "nvmf_subsystem_add_host", 00:05:13.922 "nvmf_ns_remove_host", 00:05:13.922 "nvmf_ns_add_host", 00:05:13.922 "nvmf_subsystem_remove_ns", 00:05:13.922 "nvmf_subsystem_set_ns_ana_group", 00:05:13.922 "nvmf_subsystem_add_ns", 00:05:13.922 "nvmf_subsystem_listener_set_ana_state", 00:05:13.922 "nvmf_discovery_get_referrals", 00:05:13.922 "nvmf_discovery_remove_referral", 00:05:13.922 "nvmf_discovery_add_referral", 00:05:13.922 "nvmf_subsystem_remove_listener", 00:05:13.922 "nvmf_subsystem_add_listener", 00:05:13.922 "nvmf_delete_subsystem", 00:05:13.922 "nvmf_create_subsystem", 00:05:13.922 "nvmf_get_subsystems", 00:05:13.922 "env_dpdk_get_mem_stats", 00:05:13.922 "nbd_get_disks", 00:05:13.922 "nbd_stop_disk", 00:05:13.922 "nbd_start_disk", 00:05:13.922 "ublk_recover_disk", 00:05:13.922 "ublk_get_disks", 00:05:13.922 "ublk_stop_disk", 00:05:13.922 "ublk_start_disk", 00:05:13.922 "ublk_destroy_target", 00:05:13.922 "ublk_create_target", 00:05:13.922 "virtio_blk_create_transport", 00:05:13.922 "virtio_blk_get_transports", 00:05:13.922 "vhost_controller_set_coalescing", 00:05:13.922 "vhost_get_controllers", 00:05:13.922 "vhost_delete_controller", 00:05:13.922 "vhost_create_blk_controller", 00:05:13.922 "vhost_scsi_controller_remove_target", 00:05:13.922 "vhost_scsi_controller_add_target", 00:05:13.922 "vhost_start_scsi_controller", 00:05:13.922 "vhost_create_scsi_controller", 00:05:13.922 "thread_set_cpumask", 00:05:13.922 "scheduler_set_options", 00:05:13.922 "framework_get_governor", 00:05:13.922 "framework_get_scheduler", 00:05:13.922 "framework_set_scheduler", 00:05:13.922 "framework_get_reactors", 00:05:13.922 "thread_get_io_channels", 00:05:13.922 "thread_get_pollers", 00:05:13.922 "thread_get_stats", 00:05:13.922 "framework_monitor_context_switch", 00:05:13.922 "spdk_kill_instance", 00:05:13.922 "log_enable_timestamps", 00:05:13.922 "log_get_flags", 00:05:13.922 "log_clear_flag", 
00:05:13.922 "log_set_flag", 00:05:13.922 "log_get_level", 00:05:13.922 "log_set_level", 00:05:13.922 "log_get_print_level", 00:05:13.922 "log_set_print_level", 00:05:13.922 "framework_enable_cpumask_locks", 00:05:13.922 "framework_disable_cpumask_locks", 00:05:13.922 "framework_wait_init", 00:05:13.922 "framework_start_init", 00:05:13.922 "scsi_get_devices", 00:05:13.922 "bdev_get_histogram", 00:05:13.922 "bdev_enable_histogram", 00:05:13.922 "bdev_set_qos_limit", 00:05:13.922 "bdev_set_qd_sampling_period", 00:05:13.922 "bdev_get_bdevs", 00:05:13.922 "bdev_reset_iostat", 00:05:13.922 "bdev_get_iostat", 00:05:13.922 "bdev_examine", 00:05:13.922 "bdev_wait_for_examine", 00:05:13.922 "bdev_set_options", 00:05:13.922 "accel_get_stats", 00:05:13.922 "accel_set_options", 00:05:13.922 "accel_set_driver", 00:05:13.922 "accel_crypto_key_destroy", 00:05:13.922 "accel_crypto_keys_get", 00:05:13.922 "accel_crypto_key_create", 00:05:13.922 "accel_assign_opc", 00:05:13.922 "accel_get_module_info", 00:05:13.922 "accel_get_opc_assignments", 00:05:13.922 "vmd_rescan", 00:05:13.922 "vmd_remove_device", 00:05:13.922 "vmd_enable", 00:05:13.922 "sock_get_default_impl", 00:05:13.922 "sock_set_default_impl", 00:05:13.922 "sock_impl_set_options", 00:05:13.922 "sock_impl_get_options", 00:05:13.922 "iobuf_get_stats", 00:05:13.922 "iobuf_set_options", 00:05:13.922 "keyring_get_keys", 00:05:13.922 "framework_get_pci_devices", 00:05:13.922 "framework_get_config", 00:05:13.922 "framework_get_subsystems", 00:05:13.922 "fsdev_set_opts", 00:05:13.922 "fsdev_get_opts", 00:05:13.922 "trace_get_info", 00:05:13.922 "trace_get_tpoint_group_mask", 00:05:13.922 "trace_disable_tpoint_group", 00:05:13.922 "trace_enable_tpoint_group", 00:05:13.922 "trace_clear_tpoint_mask", 00:05:13.922 "trace_set_tpoint_mask", 00:05:13.922 "notify_get_notifications", 00:05:13.922 "notify_get_types", 00:05:13.922 "spdk_get_version", 00:05:13.922 "rpc_get_methods" 00:05:13.922 ] 00:05:13.922 11:57:17 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:13.922 11:57:17 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.922 11:57:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.922 11:57:17 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:13.922 11:57:17 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57927 00:05:13.922 11:57:17 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57927 ']' 00:05:13.922 11:57:17 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57927 00:05:13.922 11:57:17 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:13.922 11:57:17 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.922 11:57:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57927 00:05:13.922 killing process with pid 57927 00:05:13.922 11:57:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.922 11:57:17 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.922 11:57:17 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57927' 00:05:13.923 11:57:17 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57927 00:05:13.923 11:57:17 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57927 00:05:17.219 ************************************ 00:05:17.219 END TEST spdkcli_tcp 00:05:17.219 ************************************ 00:05:17.219 00:05:17.219 real 0m4.611s 00:05:17.219 user 0m8.078s 00:05:17.219 sys 0m0.813s 00:05:17.219 11:57:19 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.219 11:57:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.219 11:57:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.219 11:57:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.219 11:57:19 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.219 11:57:19 -- common/autotest_common.sh@10 -- # set +x 00:05:17.219 ************************************ 00:05:17.219 START TEST dpdk_mem_utility 00:05:17.219 ************************************ 00:05:17.219 11:57:19 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.219 * Looking for test storage... 00:05:17.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:17.219 
11:57:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.219 11:57:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.219 --rc genhtml_branch_coverage=1 00:05:17.219 --rc genhtml_function_coverage=1 00:05:17.219 --rc genhtml_legend=1 00:05:17.219 --rc geninfo_all_blocks=1 00:05:17.219 --rc geninfo_unexecuted_blocks=1 00:05:17.219 00:05:17.219 ' 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.219 --rc 
genhtml_branch_coverage=1 00:05:17.219 --rc genhtml_function_coverage=1 00:05:17.219 --rc genhtml_legend=1 00:05:17.219 --rc geninfo_all_blocks=1 00:05:17.219 --rc geninfo_unexecuted_blocks=1 00:05:17.219 00:05:17.219 ' 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.219 --rc genhtml_branch_coverage=1 00:05:17.219 --rc genhtml_function_coverage=1 00:05:17.219 --rc genhtml_legend=1 00:05:17.219 --rc geninfo_all_blocks=1 00:05:17.219 --rc geninfo_unexecuted_blocks=1 00:05:17.219 00:05:17.219 ' 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.219 --rc genhtml_branch_coverage=1 00:05:17.219 --rc genhtml_function_coverage=1 00:05:17.219 --rc genhtml_legend=1 00:05:17.219 --rc geninfo_all_blocks=1 00:05:17.219 --rc geninfo_unexecuted_blocks=1 00:05:17.219 00:05:17.219 ' 00:05:17.219 11:57:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:17.219 11:57:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58054 00:05:17.219 11:57:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.219 11:57:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58054 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58054 ']' 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:17.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.219 11:57:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.219 [2024-11-19 11:57:20.263240] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:17.219 [2024-11-19 11:57:20.263423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58054 ] 00:05:17.220 [2024-11-19 11:57:20.422796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.220 [2024-11-19 11:57:20.561470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.605 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.605 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:18.605 11:57:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:18.605 11:57:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:18.605 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.605 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.605 { 00:05:18.605 "filename": "/tmp/spdk_mem_dump.txt" 00:05:18.605 } 00:05:18.605 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.605 11:57:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:18.605 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:18.605 1 heaps totaling size 816.000000 MiB 00:05:18.605 size: 
816.000000 MiB heap id: 0 00:05:18.605 end heaps---------- 00:05:18.605 9 mempools totaling size 595.772034 MiB 00:05:18.605 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:18.605 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:18.605 size: 92.545471 MiB name: bdev_io_58054 00:05:18.605 size: 50.003479 MiB name: msgpool_58054 00:05:18.605 size: 36.509338 MiB name: fsdev_io_58054 00:05:18.605 size: 21.763794 MiB name: PDU_Pool 00:05:18.606 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:18.606 size: 4.133484 MiB name: evtpool_58054 00:05:18.606 size: 0.026123 MiB name: Session_Pool 00:05:18.606 end mempools------- 00:05:18.606 6 memzones totaling size 4.142822 MiB 00:05:18.606 size: 1.000366 MiB name: RG_ring_0_58054 00:05:18.606 size: 1.000366 MiB name: RG_ring_1_58054 00:05:18.606 size: 1.000366 MiB name: RG_ring_4_58054 00:05:18.606 size: 1.000366 MiB name: RG_ring_5_58054 00:05:18.606 size: 0.125366 MiB name: RG_ring_2_58054 00:05:18.606 size: 0.015991 MiB name: RG_ring_3_58054 00:05:18.606 end memzones------- 00:05:18.606 11:57:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:18.606 heap id: 0 total size: 816.000000 MiB number of busy elements: 320 number of free elements: 18 00:05:18.606 list of free elements. 
size: 16.790161 MiB 00:05:18.606 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:18.606 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:18.606 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:18.606 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:18.606 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:18.606 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:18.606 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:18.606 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:18.606 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:18.606 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:18.606 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:18.606 element at address: 0x20001ac00000 with size: 0.560486 MiB 00:05:18.606 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:18.606 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:18.606 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:18.606 element at address: 0x200012c00000 with size: 0.443481 MiB 00:05:18.606 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:18.606 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:18.606 list of standard malloc elements. 
size: 199.288940 MiB 00:05:18.606 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:18.606 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:18.606 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:18.606 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:18.606 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:18.606 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:18.606 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:18.606 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:18.606 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:18.606 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:18.606 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:18.606 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:18.606 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:18.606 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:18.606 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:18.606 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:18.606 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:18.606 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:18.607 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:18.607 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:18.607 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac90cc0 with size: 0.000244 
MiB 00:05:18.607 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac928c0 
with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:18.607 element at 
address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:18.607 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806b780 with size: 0.000244 MiB 
00:05:18.607 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:18.607 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806d380 with 
size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:18.608 element at address: 
0x20002806ef80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:18.608 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:18.608 list of memzone associated elements. 
size: 599.920898 MiB 00:05:18.608 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:18.608 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:18.608 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:18.608 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:18.608 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:18.608 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58054_0 00:05:18.608 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:18.608 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58054_0 00:05:18.608 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:18.608 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58054_0 00:05:18.608 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:18.608 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:18.608 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:18.608 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:18.608 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:18.608 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58054_0 00:05:18.608 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:18.608 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58054 00:05:18.608 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:18.608 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58054 00:05:18.608 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:18.608 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:18.608 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:18.608 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:18.608 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:18.608 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:18.608 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:18.608 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:18.608 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:18.608 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58054 00:05:18.608 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:18.608 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58054 00:05:18.608 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:18.608 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58054 00:05:18.608 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:18.608 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58054 00:05:18.608 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:18.608 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58054 00:05:18.608 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:18.608 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58054 00:05:18.608 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:18.608 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:18.608 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:18.608 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:18.608 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:18.608 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:18.608 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:18.608 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58054 00:05:18.608 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:18.608 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58054 00:05:18.608 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:18.608 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:18.608 element at address: 0x200028064140 with size: 0.023804 MiB
00:05:18.608 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:18.608 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:18.608 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58054
00:05:18.608 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:05:18.608 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:18.608 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:18.608 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58054
00:05:18.608 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:18.608 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58054
00:05:18.608 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:18.608 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58054
00:05:18.608 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:05:18.608 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:18.608 11:57:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:18.608 11:57:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58054
00:05:18.608 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58054 ']'
00:05:18.608 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58054
00:05:18.608 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:18.608 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:18.608 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58054
00:05:18.608 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:18.608 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:18.608 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58054'
00:05:18.608 killing process with pid 58054
00:05:18.608 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58054
00:05:18.608 11:57:21 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58054
00:05:21.148
00:05:21.148 real 0m4.432s
00:05:21.148 user 0m4.163s
00:05:21.148 sys 0m0.747s
00:05:21.148 11:57:24 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:21.148 11:57:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:21.148 ************************************
00:05:21.148 END TEST dpdk_mem_utility
00:05:21.148 ************************************
00:05:21.148 11:57:24 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:21.148 11:57:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:21.148 11:57:24 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:21.148 11:57:24 -- common/autotest_common.sh@10 -- # set +x
00:05:21.148 ************************************
00:05:21.148 START TEST event
00:05:21.148 ************************************
00:05:21.148 11:57:24 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:21.408 * Looking for test storage...
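The teardown traced above guards the kill with a liveness probe (`kill -0`), a command-name lookup (`ps --no-headers -o comm=`), and a refusal to kill `sudo`, then kills and reaps the pid. A minimal standalone sketch of that pattern (the helper name and the throwaway `sleep` target are illustrative, not the real `killprocess` from autotest_common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the pid-guarded teardown traced above (hypothetical helper,
# not the real killprocess from common/autotest_common.sh).
killprocess_sketch() {
    local pid=$1
    # kill -0 delivers no signal; it only tests that the pid exists
    kill -0 "$pid" 2>/dev/null || return 1
    # resolve the command name, as the trace does with ps -o comm=
    local name
    name=$(ps --no-headers -o comm= -p "$pid")
    # refuse to kill a sudo wrapper by mistake
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # reap the child so no zombie is left behind
    wait "$pid" 2>/dev/null || true
}

# usage: start a throwaway background process and tear it down
sleep 60 &
killprocess_sketch $!
```

The `kill -0` probe is what lets the helper return early instead of erroring when the process already exited on its own.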
00:05:21.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:21.408 11:57:24 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.408 11:57:24 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.408 11:57:24 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.408 11:57:24 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.408 11:57:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.408 11:57:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.408 11:57:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.408 11:57:24 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.408 11:57:24 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.408 11:57:24 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.408 11:57:24 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.408 11:57:24 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.408 11:57:24 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.408 11:57:24 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.408 11:57:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.408 11:57:24 event -- scripts/common.sh@344 -- # case "$op" in 00:05:21.408 11:57:24 event -- scripts/common.sh@345 -- # : 1 00:05:21.408 11:57:24 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.408 11:57:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.408 11:57:24 event -- scripts/common.sh@365 -- # decimal 1 00:05:21.408 11:57:24 event -- scripts/common.sh@353 -- # local d=1 00:05:21.408 11:57:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.408 11:57:24 event -- scripts/common.sh@355 -- # echo 1 00:05:21.408 11:57:24 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.408 11:57:24 event -- scripts/common.sh@366 -- # decimal 2 00:05:21.408 11:57:24 event -- scripts/common.sh@353 -- # local d=2 00:05:21.408 11:57:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.408 11:57:24 event -- scripts/common.sh@355 -- # echo 2 00:05:21.408 11:57:24 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.408 11:57:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.408 11:57:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.408 11:57:24 event -- scripts/common.sh@368 -- # return 0 00:05:21.408 11:57:24 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.408 11:57:24 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.408 --rc genhtml_branch_coverage=1 00:05:21.409 --rc genhtml_function_coverage=1 00:05:21.409 --rc genhtml_legend=1 00:05:21.409 --rc geninfo_all_blocks=1 00:05:21.409 --rc geninfo_unexecuted_blocks=1 00:05:21.409 00:05:21.409 ' 00:05:21.409 11:57:24 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.409 --rc genhtml_branch_coverage=1 00:05:21.409 --rc genhtml_function_coverage=1 00:05:21.409 --rc genhtml_legend=1 00:05:21.409 --rc geninfo_all_blocks=1 00:05:21.409 --rc geninfo_unexecuted_blocks=1 00:05:21.409 00:05:21.409 ' 00:05:21.409 11:57:24 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.409 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:21.409 --rc genhtml_branch_coverage=1 00:05:21.409 --rc genhtml_function_coverage=1 00:05:21.409 --rc genhtml_legend=1 00:05:21.409 --rc geninfo_all_blocks=1 00:05:21.409 --rc geninfo_unexecuted_blocks=1 00:05:21.409 00:05:21.409 ' 00:05:21.409 11:57:24 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.409 --rc genhtml_branch_coverage=1 00:05:21.409 --rc genhtml_function_coverage=1 00:05:21.409 --rc genhtml_legend=1 00:05:21.409 --rc geninfo_all_blocks=1 00:05:21.409 --rc geninfo_unexecuted_blocks=1 00:05:21.409 00:05:21.409 ' 00:05:21.409 11:57:24 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:21.409 11:57:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:21.409 11:57:24 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.409 11:57:24 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:21.409 11:57:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.409 11:57:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.409 ************************************ 00:05:21.409 START TEST event_perf 00:05:21.409 ************************************ 00:05:21.409 11:57:24 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.409 Running I/O for 1 seconds...[2024-11-19 11:57:24.729762] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:21.409 [2024-11-19 11:57:24.729906] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58168 ] 00:05:21.668 [2024-11-19 11:57:24.904950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.930 [2024-11-19 11:57:25.055571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.930 [2024-11-19 11:57:25.055760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.930 [2024-11-19 11:57:25.055922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.930 Running I/O for 1 seconds...[2024-11-19 11:57:25.055945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.320 00:05:23.320 lcore 0: 93007 00:05:23.320 lcore 1: 93010 00:05:23.320 lcore 2: 93014 00:05:23.320 lcore 3: 93008 00:05:23.320 done. 
00:05:23.320 00:05:23.320 real 0m1.640s 00:05:23.320 user 0m4.377s 00:05:23.320 sys 0m0.136s 00:05:23.320 11:57:26 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.320 11:57:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.320 ************************************ 00:05:23.320 END TEST event_perf 00:05:23.320 ************************************ 00:05:23.320 11:57:26 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:23.320 11:57:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:23.320 11:57:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.320 11:57:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.320 ************************************ 00:05:23.320 START TEST event_reactor 00:05:23.320 ************************************ 00:05:23.320 11:57:26 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:23.320 [2024-11-19 11:57:26.441743] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:23.320 [2024-11-19 11:57:26.441905] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58206 ] 00:05:23.320 [2024-11-19 11:57:26.619147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.580 [2024-11-19 11:57:26.753115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.964 test_start 00:05:24.964 oneshot 00:05:24.964 tick 100 00:05:24.964 tick 100 00:05:24.964 tick 250 00:05:24.964 tick 100 00:05:24.964 tick 100 00:05:24.964 tick 250 00:05:24.964 tick 100 00:05:24.964 tick 500 00:05:24.964 tick 100 00:05:24.964 tick 100 00:05:24.964 tick 250 00:05:24.964 tick 100 00:05:24.964 tick 100 00:05:24.964 test_end 00:05:24.964 00:05:24.964 real 0m1.602s 00:05:24.964 user 0m1.370s 00:05:24.964 sys 0m0.120s 00:05:24.964 11:57:27 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.964 11:57:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:24.964 ************************************ 00:05:24.964 END TEST event_reactor 00:05:24.964 ************************************ 00:05:24.964 11:57:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:24.964 11:57:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:24.964 11:57:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.964 11:57:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.964 ************************************ 00:05:24.964 START TEST event_reactor_perf 00:05:24.964 ************************************ 00:05:24.964 11:57:28 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:24.964 [2024-11-19 
11:57:28.112440] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:24.964 [2024-11-19 11:57:28.112542] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58244 ] 00:05:24.964 [2024-11-19 11:57:28.292661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.223 [2024-11-19 11:57:28.426720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.603 test_start 00:05:26.603 test_end 00:05:26.603 Performance: 396398 events per second 00:05:26.603 00:05:26.603 real 0m1.613s 00:05:26.603 user 0m1.386s 00:05:26.603 sys 0m0.117s 00:05:26.603 ************************************ 00:05:26.603 END TEST event_reactor_perf 00:05:26.603 ************************************ 00:05:26.603 11:57:29 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.603 11:57:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.603 11:57:29 event -- event/event.sh@49 -- # uname -s 00:05:26.603 11:57:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:26.603 11:57:29 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:26.603 11:57:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.603 11:57:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.603 11:57:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.603 ************************************ 00:05:26.603 START TEST event_scheduler 00:05:26.603 ************************************ 00:05:26.603 11:57:29 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:26.603 * Looking for test storage... 
00:05:26.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:26.603 11:57:29 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.603 11:57:29 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.603 11:57:29 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.603 11:57:29 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.603 11:57:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:26.863 11:57:29 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.863 11:57:29 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.863 11:57:29 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.863 11:57:29 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:26.863 11:57:29 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.863 11:57:29 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.863 --rc genhtml_branch_coverage=1 00:05:26.863 --rc genhtml_function_coverage=1 00:05:26.863 --rc genhtml_legend=1 00:05:26.863 --rc geninfo_all_blocks=1 00:05:26.863 --rc geninfo_unexecuted_blocks=1 00:05:26.863 00:05:26.863 ' 00:05:26.863 11:57:29 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.863 --rc genhtml_branch_coverage=1 00:05:26.863 --rc genhtml_function_coverage=1 00:05:26.863 --rc 
genhtml_legend=1 00:05:26.863 --rc geninfo_all_blocks=1 00:05:26.863 --rc geninfo_unexecuted_blocks=1 00:05:26.863 00:05:26.863 ' 00:05:26.863 11:57:29 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.863 --rc genhtml_branch_coverage=1 00:05:26.863 --rc genhtml_function_coverage=1 00:05:26.863 --rc genhtml_legend=1 00:05:26.863 --rc geninfo_all_blocks=1 00:05:26.863 --rc geninfo_unexecuted_blocks=1 00:05:26.863 00:05:26.863 ' 00:05:26.863 11:57:29 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.863 --rc genhtml_branch_coverage=1 00:05:26.863 --rc genhtml_function_coverage=1 00:05:26.863 --rc genhtml_legend=1 00:05:26.863 --rc geninfo_all_blocks=1 00:05:26.863 --rc geninfo_unexecuted_blocks=1 00:05:26.863 00:05:26.863 ' 00:05:26.863 11:57:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:26.863 11:57:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58320 00:05:26.863 11:57:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:26.863 11:57:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.863 11:57:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58320 00:05:26.863 11:57:29 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58320 ']' 00:05:26.863 11:57:29 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.863 11:57:29 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.863 11:57:29 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.863 11:57:29 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.863 11:57:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.863 [2024-11-19 11:57:30.064243] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:26.863 [2024-11-19 11:57:30.064354] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58320 ] 00:05:27.123 [2024-11-19 11:57:30.237884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:27.123 [2024-11-19 11:57:30.366137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.123 [2024-11-19 11:57:30.366326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.123 [2024-11-19 11:57:30.366647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.123 [2024-11-19 11:57:30.366683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.693 11:57:30 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.693 11:57:30 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:27.693 11:57:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:27.693 11:57:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.693 11:57:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.693 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:27.693 POWER: Cannot set governor of lcore 0 to userspace 00:05:27.693 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:27.693 POWER: Cannot set governor of lcore 0 to performance 00:05:27.693 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:27.693 POWER: Cannot set governor of lcore 0 to userspace 00:05:27.693 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:27.693 POWER: Cannot set governor of lcore 0 to userspace 00:05:27.693 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:27.693 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:27.693 POWER: Unable to set Power Management Environment for lcore 0 00:05:27.693 [2024-11-19 11:57:30.915809] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:27.693 [2024-11-19 11:57:30.915833] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:27.693 [2024-11-19 11:57:30.915845] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:27.693 [2024-11-19 11:57:30.915866] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:27.693 [2024-11-19 11:57:30.915875] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:27.693 [2024-11-19 11:57:30.915885] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:27.693 11:57:30 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.693 11:57:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:27.693 11:57:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.693 11:57:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.965 [2024-11-19 11:57:31.226459] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:27.965 11:57:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.965 11:57:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:27.965 11:57:31 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.965 11:57:31 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.965 11:57:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.965 ************************************ 00:05:27.965 START TEST scheduler_create_thread 00:05:27.965 ************************************ 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.965 2 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.965 3 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.965 4 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.965 5 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:27.965 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.966 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.966 6 00:05:27.966 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.966 11:57:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:27.966 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.966 11:57:31 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:27.966 7 00:05:27.966 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.966 11:57:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:27.966 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.966 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.966 8 00:05:27.966 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.966 11:57:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:27.966 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.966 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.238 9 00:05:28.238 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.238 11:57:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:28.238 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.238 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.238 10 00:05:28.238 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.238 11:57:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:28.238 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.238 11:57:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.619 11:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.619 11:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:29.619 11:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:29.619 11:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.619 11:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.189 11:57:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.189 11:57:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:30.189 11:57:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.189 11:57:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.126 11:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.126 11:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:31.126 11:57:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:31.126 11:57:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.126 11:57:34 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.062 11:57:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.062 00:05:32.062 ************************************ 00:05:32.062 END TEST scheduler_create_thread 00:05:32.062 ************************************ 00:05:32.062 real 0m3.886s 00:05:32.062 user 0m0.019s 00:05:32.062 sys 0m0.008s 00:05:32.062 11:57:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.062 11:57:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.062 11:57:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:32.062 11:57:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58320 00:05:32.062 11:57:35 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58320 ']' 00:05:32.062 11:57:35 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58320 00:05:32.062 11:57:35 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:32.062 11:57:35 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.062 11:57:35 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58320 00:05:32.062 killing process with pid 58320 00:05:32.062 11:57:35 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:32.062 11:57:35 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:32.062 11:57:35 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58320' 00:05:32.062 11:57:35 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58320 00:05:32.062 11:57:35 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58320 00:05:32.322 [2024-11-19 11:57:35.500662] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:33.702 00:05:33.702 real 0m7.004s 00:05:33.702 user 0m14.533s 00:05:33.702 sys 0m0.500s 00:05:33.702 11:57:36 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.702 ************************************ 00:05:33.702 END TEST event_scheduler 00:05:33.702 ************************************ 00:05:33.702 11:57:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.702 11:57:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:33.702 11:57:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:33.702 11:57:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.702 11:57:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.702 11:57:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.702 ************************************ 00:05:33.702 START TEST app_repeat 00:05:33.702 ************************************ 00:05:33.702 11:57:36 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:33.702 11:57:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.702 11:57:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.702 11:57:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:33.702 11:57:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.702 11:57:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:33.702 11:57:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:33.702 11:57:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:33.702 11:57:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58437 00:05:33.702 11:57:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:33.702 
11:57:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.702 11:57:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58437' 00:05:33.702 Process app_repeat pid: 58437 00:05:33.702 11:57:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:33.702 spdk_app_start Round 0 00:05:33.702 11:57:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:33.702 11:57:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58437 /var/tmp/spdk-nbd.sock 00:05:33.702 11:57:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58437 ']' 00:05:33.702 11:57:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.702 11:57:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.702 11:57:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.702 11:57:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.702 11:57:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.702 [2024-11-19 11:57:36.883553] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:33.702 [2024-11-19 11:57:36.883663] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58437 ] 00:05:33.702 [2024-11-19 11:57:37.059224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.963 [2024-11-19 11:57:37.178146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.963 [2024-11-19 11:57:37.178194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.533 11:57:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.533 11:57:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:34.533 11:57:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.794 Malloc0 00:05:34.794 11:57:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.054 Malloc1 00:05:35.054 11:57:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.054 11:57:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.054 11:57:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.054 11:57:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.054 11:57:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.054 11:57:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.054 11:57:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.054 11:57:38 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.054 11:57:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.054 11:57:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.054 11:57:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.054 11:57:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.054 11:57:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.054 11:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.054 11:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.054 11:57:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.314 /dev/nbd0 00:05:35.314 11:57:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.314 11:57:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.314 11:57:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:35.314 11:57:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:35.314 11:57:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:35.314 11:57:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:35.314 11:57:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:35.314 11:57:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:35.314 11:57:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:35.314 11:57:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:35.314 11:57:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.314 1+0 records in 00:05:35.314 1+0 
records out 00:05:35.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184733 s, 22.2 MB/s 00:05:35.314 11:57:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.314 11:57:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:35.314 11:57:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.314 11:57:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:35.314 11:57:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:35.314 11:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.314 11:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.314 11:57:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.574 /dev/nbd1 00:05:35.574 11:57:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.574 11:57:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.574 11:57:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:35.574 11:57:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:35.574 11:57:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:35.574 11:57:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:35.574 11:57:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:35.574 11:57:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:35.574 11:57:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:35.574 11:57:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:35.574 11:57:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.574 1+0 records in 00:05:35.574 1+0 records out 00:05:35.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360994 s, 11.3 MB/s 00:05:35.574 11:57:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.574 11:57:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:35.574 11:57:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.574 11:57:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:35.574 11:57:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:35.574 11:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.574 11:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.574 11:57:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.574 11:57:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.574 11:57:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.834 11:57:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:35.834 { 00:05:35.834 "nbd_device": "/dev/nbd0", 00:05:35.834 "bdev_name": "Malloc0" 00:05:35.834 }, 00:05:35.834 { 00:05:35.834 "nbd_device": "/dev/nbd1", 00:05:35.834 "bdev_name": "Malloc1" 00:05:35.834 } 00:05:35.834 ]' 00:05:35.834 11:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.834 { 00:05:35.834 "nbd_device": "/dev/nbd0", 00:05:35.834 "bdev_name": "Malloc0" 00:05:35.834 }, 00:05:35.834 { 00:05:35.834 "nbd_device": "/dev/nbd1", 00:05:35.834 "bdev_name": "Malloc1" 00:05:35.834 } 00:05:35.834 ]' 00:05:35.834 11:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:35.834 11:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.834 /dev/nbd1' 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.835 /dev/nbd1' 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.835 256+0 records in 00:05:35.835 256+0 records out 00:05:35.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188323 s, 55.7 MB/s 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.835 256+0 records in 00:05:35.835 256+0 records out 00:05:35.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265178 s, 39.5 MB/s 00:05:35.835 11:57:39 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.835 256+0 records in 00:05:35.835 256+0 records out 00:05:35.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287406 s, 36.5 MB/s 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.835 11:57:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.094 11:57:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.094 11:57:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.094 11:57:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.094 11:57:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.094 11:57:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.094 11:57:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.094 11:57:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.094 11:57:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.094 11:57:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.094 11:57:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.354 11:57:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.354 11:57:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.354 11:57:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.354 11:57:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.354 11:57:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.354 11:57:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.354 11:57:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:36.354 11:57:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.354 11:57:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.354 11:57:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.354 11:57:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.613 11:57:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.613 11:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.613 11:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.613 11:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.614 11:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.614 11:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.614 11:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.614 11:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.614 11:57:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.614 11:57:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.614 11:57:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.614 11:57:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.614 11:57:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.183 11:57:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.569 [2024-11-19 11:57:41.616443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.569 [2024-11-19 11:57:41.740709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.569 [2024-11-19 11:57:41.740708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.830 
[2024-11-19 11:57:41.947659] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.830 [2024-11-19 11:57:41.947744] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.212 spdk_app_start Round 1 00:05:40.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.212 11:57:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.212 11:57:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:40.212 11:57:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58437 /var/tmp/spdk-nbd.sock 00:05:40.212 11:57:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58437 ']' 00:05:40.212 11:57:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.212 11:57:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.212 11:57:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:40.212 11:57:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.212 11:57:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.473 11:57:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.473 11:57:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:40.473 11:57:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.733 Malloc0 00:05:40.733 11:57:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.993 Malloc1 00:05:40.993 11:57:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.993 11:57:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.993 11:57:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.993 11:57:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.993 11:57:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.993 11:57:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.993 11:57:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.993 11:57:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.993 11:57:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.993 11:57:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.993 11:57:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.993 11:57:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.993 11:57:44 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.993 11:57:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.993 11:57:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.993 11:57:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.253 /dev/nbd0 00:05:41.253 11:57:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.253 11:57:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.253 11:57:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:41.253 11:57:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.253 11:57:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.253 11:57:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.253 11:57:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:41.253 11:57:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.253 11:57:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.253 11:57:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.253 11:57:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.253 1+0 records in 00:05:41.253 1+0 records out 00:05:41.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268811 s, 15.2 MB/s 00:05:41.253 11:57:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.253 11:57:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.253 11:57:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.253 
11:57:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.253 11:57:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.254 11:57:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.254 11:57:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.254 11:57:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.514 /dev/nbd1 00:05:41.514 11:57:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.514 11:57:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.514 11:57:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:41.514 11:57:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.514 11:57:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.514 11:57:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.514 11:57:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:41.514 11:57:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.514 11:57:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.514 11:57:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.514 11:57:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.514 1+0 records in 00:05:41.514 1+0 records out 00:05:41.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300728 s, 13.6 MB/s 00:05:41.514 11:57:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.514 11:57:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.514 11:57:44 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.514 11:57:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.514 11:57:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.514 11:57:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.514 11:57:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.514 11:57:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.514 11:57:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.514 11:57:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.774 11:57:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.774 { 00:05:41.774 "nbd_device": "/dev/nbd0", 00:05:41.774 "bdev_name": "Malloc0" 00:05:41.774 }, 00:05:41.774 { 00:05:41.774 "nbd_device": "/dev/nbd1", 00:05:41.774 "bdev_name": "Malloc1" 00:05:41.774 } 00:05:41.774 ]' 00:05:41.774 11:57:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.774 { 00:05:41.774 "nbd_device": "/dev/nbd0", 00:05:41.774 "bdev_name": "Malloc0" 00:05:41.774 }, 00:05:41.774 { 00:05:41.774 "nbd_device": "/dev/nbd1", 00:05:41.774 "bdev_name": "Malloc1" 00:05:41.774 } 00:05:41.774 ]' 00:05:41.774 11:57:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.774 /dev/nbd1' 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.774 /dev/nbd1' 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.774 
11:57:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.774 256+0 records in 00:05:41.774 256+0 records out 00:05:41.774 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013693 s, 76.6 MB/s 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.774 256+0 records in 00:05:41.774 256+0 records out 00:05:41.774 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289458 s, 36.2 MB/s 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.774 256+0 records in 00:05:41.774 256+0 records out 00:05:41.774 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307063 s, 34.1 MB/s 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.774 11:57:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.035 11:57:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.035 11:57:45 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.296 11:57:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.556 11:57:45 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.556 11:57:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.556 11:57:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.556 11:57:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.556 11:57:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.556 11:57:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.817 11:57:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.817 11:57:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.817 11:57:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.817 11:57:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.817 11:57:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.817 11:57:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.817 11:57:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.077 11:57:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.457 [2024-11-19 11:57:47.648139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.457 [2024-11-19 11:57:47.774805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.457 [2024-11-19 11:57:47.774835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.743 [2024-11-19 11:57:47.985050] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.743 [2024-11-19 11:57:47.985134] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.143 spdk_app_start Round 2 00:05:46.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:46.143 11:57:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:46.143 11:57:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:46.143 11:57:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58437 /var/tmp/spdk-nbd.sock 00:05:46.143 11:57:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58437 ']' 00:05:46.143 11:57:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.143 11:57:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.143 11:57:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.143 11:57:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.143 11:57:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.406 11:57:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.406 11:57:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:46.406 11:57:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.665 Malloc0 00:05:46.665 11:57:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.925 Malloc1 00:05:46.925 11:57:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.925 11:57:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:47.184 /dev/nbd0 00:05:47.184 11:57:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.184 11:57:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:47.184 11:57:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:47.184 11:57:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:47.184 11:57:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:47.184 11:57:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:47.184 11:57:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:47.184 11:57:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:47.184 11:57:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:47.184 11:57:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:47.184 11:57:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.184 1+0 records in 00:05:47.184 1+0 records out 00:05:47.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375718 s, 10.9 MB/s 00:05:47.184 11:57:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.184 11:57:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:47.184 11:57:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.184 11:57:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:47.184 11:57:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:47.185 11:57:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.185 11:57:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.185 11:57:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:47.185 /dev/nbd1 00:05:47.444 11:57:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:47.444 11:57:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:47.444 11:57:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:47.444 11:57:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:47.444 11:57:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:47.444 11:57:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:47.444 11:57:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:47.444 11:57:50 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:47.444 11:57:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:47.444 11:57:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:47.444 11:57:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.444 1+0 records in 00:05:47.444 1+0 records out 00:05:47.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341367 s, 12.0 MB/s 00:05:47.444 11:57:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.444 11:57:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:47.444 11:57:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.444 11:57:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:47.444 11:57:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:47.444 11:57:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.444 11:57:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.444 11:57:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.444 11:57:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.444 11:57:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.444 11:57:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.444 { 00:05:47.444 "nbd_device": "/dev/nbd0", 00:05:47.444 "bdev_name": "Malloc0" 00:05:47.444 }, 00:05:47.444 { 00:05:47.444 "nbd_device": "/dev/nbd1", 00:05:47.444 "bdev_name": "Malloc1" 00:05:47.444 } 00:05:47.444 ]' 00:05:47.444 11:57:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.444 { 
00:05:47.444 "nbd_device": "/dev/nbd0", 00:05:47.444 "bdev_name": "Malloc0" 00:05:47.444 }, 00:05:47.444 { 00:05:47.444 "nbd_device": "/dev/nbd1", 00:05:47.444 "bdev_name": "Malloc1" 00:05:47.444 } 00:05:47.444 ]' 00:05:47.444 11:57:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.704 /dev/nbd1' 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.704 /dev/nbd1' 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.704 256+0 records in 00:05:47.704 256+0 records out 00:05:47.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146617 s, 71.5 MB/s 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.704 11:57:50 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.704 256+0 records in 00:05:47.704 256+0 records out 00:05:47.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226832 s, 46.2 MB/s 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.704 256+0 records in 00:05:47.704 256+0 records out 00:05:47.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286661 s, 36.6 MB/s 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
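The `dd`/`cmp` sequence just logged is a write-then-verify pass: seed a temp file with 1 MiB of random data, copy it to each nbd device with direct I/O, then byte-compare each device against the source. A self-contained sketch of that pattern (function name and the optional-`oflag` handling are illustrative; pass `direct` as the second argument when targeting real block devices):

```shell
#!/usr/bin/env bash
# Sketch of the nbd write/verify flow seen in the log. Placeholder names.
verify_devices() {
    local tmp_file=$1 oflag=$2
    shift 2
    local dev
    # seed 1 MiB (256 x 4096-byte blocks) of random data
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
    for dev in "$@"; do
        # write the pattern to the device, then byte-compare it back
        dd if="$tmp_file" of="$dev" bs=4096 count=256 ${oflag:+oflag="$oflag"} status=none
        cmp -b -n 1M "$tmp_file" "$dev" || return 1
    done
    rm -f "$tmp_file"   # left in place on failure, for inspection
}
```

Comparing only the first 1 MiB (`-n 1M`) restricts the check to exactly the region that was just written, so device capacity beyond the test pattern does not matter.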
00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.704 11:57:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.963 11:57:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.963 11:57:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.963 11:57:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.963 11:57:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.963 11:57:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.963 11:57:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.963 11:57:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.963 11:57:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.963 11:57:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.963 11:57:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:48.223 11:57:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:48.223 11:57:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:48.223 11:57:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:48.223 11:57:51 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.223 11:57:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.223 11:57:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:48.223 11:57:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.223 11:57:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.223 11:57:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.223 11:57:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.223 11:57:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.483 11:57:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.483 11:57:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.483 11:57:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.483 11:57:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.483 11:57:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.483 11:57:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.483 11:57:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.483 11:57:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.483 11:57:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.483 11:57:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.483 11:57:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.483 11:57:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.483 11:57:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.054 11:57:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.437 
[2024-11-19 11:57:53.444276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.437 [2024-11-19 11:57:53.567720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.437 [2024-11-19 11:57:53.567721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.437 [2024-11-19 11:57:53.772234] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.437 [2024-11-19 11:57:53.772345] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.874 11:57:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58437 /var/tmp/spdk-nbd.sock 00:05:51.874 11:57:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58437 ']' 00:05:51.874 11:57:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.874 11:57:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.874 11:57:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
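The teardown that follows uses a `killprocess`-style helper: confirm the pid is still alive with `kill -0`, check the command name so a recycled pid is not killed by mistake, send SIGTERM, then wait for the process to exit. A sketch under those assumptions (helper name is hypothetical):

```shell
#!/usr/bin/env bash
# Hypothetical kill-and-wait helper mirroring the pattern in the log.
kill_and_wait() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= -p "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap it if it is our child
    while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
}
```

The extra `wait` matters when the target is a child of the calling shell: without reaping, the child lingers as a zombie and `kill -0` keeps succeeding, so the poll loop would never terminate.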
00:05:51.874 11:57:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.874 11:57:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.134 11:57:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.135 11:57:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:52.135 11:57:55 event.app_repeat -- event/event.sh@39 -- # killprocess 58437 00:05:52.135 11:57:55 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58437 ']' 00:05:52.135 11:57:55 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58437 00:05:52.135 11:57:55 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:52.135 11:57:55 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.135 11:57:55 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58437 00:05:52.135 killing process with pid 58437 00:05:52.135 11:57:55 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.135 11:57:55 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.135 11:57:55 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58437' 00:05:52.135 11:57:55 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58437 00:05:52.135 11:57:55 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58437 00:05:53.518 spdk_app_start is called in Round 0. 00:05:53.518 Shutdown signal received, stop current app iteration 00:05:53.518 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 reinitialization... 00:05:53.518 spdk_app_start is called in Round 1. 00:05:53.518 Shutdown signal received, stop current app iteration 00:05:53.518 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 reinitialization... 00:05:53.518 spdk_app_start is called in Round 2. 
00:05:53.518 Shutdown signal received, stop current app iteration 00:05:53.518 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 reinitialization... 00:05:53.518 spdk_app_start is called in Round 3. 00:05:53.519 Shutdown signal received, stop current app iteration 00:05:53.519 11:57:56 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:53.519 11:57:56 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:53.519 00:05:53.519 real 0m19.713s 00:05:53.519 user 0m42.490s 00:05:53.519 sys 0m2.555s 00:05:53.519 11:57:56 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.519 11:57:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.519 ************************************ 00:05:53.519 END TEST app_repeat 00:05:53.519 ************************************ 00:05:53.519 11:57:56 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:53.519 11:57:56 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:53.519 11:57:56 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.519 11:57:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.519 11:57:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.519 ************************************ 00:05:53.519 START TEST cpu_locks 00:05:53.519 ************************************ 00:05:53.519 11:57:56 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:53.519 * Looking for test storage... 
00:05:53.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:53.519 11:57:56 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.519 11:57:56 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.519 11:57:56 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.519 11:57:56 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.519 11:57:56 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:53.519 11:57:56 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.519 11:57:56 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.519 --rc genhtml_branch_coverage=1 00:05:53.519 --rc genhtml_function_coverage=1 00:05:53.519 --rc genhtml_legend=1 00:05:53.519 --rc geninfo_all_blocks=1 00:05:53.519 --rc geninfo_unexecuted_blocks=1 00:05:53.519 00:05:53.519 ' 00:05:53.519 11:57:56 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.519 --rc genhtml_branch_coverage=1 00:05:53.519 --rc genhtml_function_coverage=1 00:05:53.519 --rc genhtml_legend=1 00:05:53.519 --rc geninfo_all_blocks=1 00:05:53.519 --rc geninfo_unexecuted_blocks=1 
00:05:53.519 00:05:53.519 ' 00:05:53.519 11:57:56 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.519 --rc genhtml_branch_coverage=1 00:05:53.519 --rc genhtml_function_coverage=1 00:05:53.519 --rc genhtml_legend=1 00:05:53.519 --rc geninfo_all_blocks=1 00:05:53.519 --rc geninfo_unexecuted_blocks=1 00:05:53.519 00:05:53.519 ' 00:05:53.519 11:57:56 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.519 --rc genhtml_branch_coverage=1 00:05:53.519 --rc genhtml_function_coverage=1 00:05:53.519 --rc genhtml_legend=1 00:05:53.519 --rc geninfo_all_blocks=1 00:05:53.519 --rc geninfo_unexecuted_blocks=1 00:05:53.519 00:05:53.519 ' 00:05:53.519 11:57:56 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:53.519 11:57:56 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:53.519 11:57:56 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:53.519 11:57:56 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:53.519 11:57:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.519 11:57:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.519 11:57:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.519 ************************************ 00:05:53.519 START TEST default_locks 00:05:53.519 ************************************ 00:05:53.519 11:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:53.519 11:57:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58890 00:05:53.519 11:57:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.519 
11:57:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58890 00:05:53.519 11:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58890 ']' 00:05:53.519 11:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.519 11:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.519 11:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.519 11:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.519 11:57:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.779 [2024-11-19 11:57:56.923802] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:53.779 [2024-11-19 11:57:56.923922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58890 ] 00:05:53.779 [2024-11-19 11:57:57.099462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.038 [2024-11-19 11:57:57.222659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.976 11:57:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.976 11:57:58 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:54.976 11:57:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58890 00:05:54.976 11:57:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58890 00:05:54.976 11:57:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.976 11:57:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58890 00:05:54.976 11:57:58 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58890 ']' 00:05:54.976 11:57:58 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58890 00:05:54.976 11:57:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:54.976 11:57:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.976 11:57:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58890 00:05:55.236 killing process with pid 58890 00:05:55.236 11:57:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.236 11:57:58 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.236 11:57:58 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58890' 00:05:55.236 11:57:58 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58890 00:05:55.236 11:57:58 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58890 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58890 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58890 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58890 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58890 ']' 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.774 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58890) - No such process 00:05:57.774 ERROR: process (pid: 58890) is no longer running 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.774 00:05:57.774 real 0m4.025s 00:05:57.774 user 0m3.983s 00:05:57.774 sys 0m0.584s 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.774 11:58:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.774 ************************************ 00:05:57.774 END TEST default_locks 00:05:57.774 ************************************ 00:05:57.774 11:58:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:57.774 11:58:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:57.774 11:58:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.774 11:58:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.774 ************************************ 00:05:57.774 START TEST default_locks_via_rpc 00:05:57.774 ************************************ 00:05:57.774 11:58:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:57.774 11:58:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58965 00:05:57.774 11:58:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.774 11:58:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58965 00:05:57.774 11:58:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58965 ']' 00:05:57.774 11:58:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.774 11:58:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.774 11:58:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.774 11:58:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.774 11:58:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.774 [2024-11-19 11:58:01.010106] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:57.774 [2024-11-19 11:58:01.010243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58965 ] 00:05:58.034 [2024-11-19 11:58:01.172388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.034 [2024-11-19 11:58:01.285124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.972 11:58:02 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58965 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.972 11:58:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58965 00:05:59.542 11:58:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58965 00:05:59.542 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58965 ']' 00:05:59.542 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58965 00:05:59.542 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:59.542 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.542 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58965 00:05:59.542 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.542 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.542 killing process with pid 58965 00:05:59.542 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58965' 00:05:59.542 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58965 00:05:59.542 11:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58965 00:06:02.081 00:06:02.081 real 0m4.299s 00:06:02.081 user 0m4.262s 00:06:02.081 sys 0m0.670s 00:06:02.081 11:58:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.081 11:58:05 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.081 ************************************ 00:06:02.081 END TEST default_locks_via_rpc 00:06:02.081 ************************************ 00:06:02.081 11:58:05 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:02.081 11:58:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.081 11:58:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.081 11:58:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.081 ************************************ 00:06:02.081 START TEST non_locking_app_on_locked_coremask 00:06:02.081 ************************************ 00:06:02.081 11:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:02.081 11:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59041 00:06:02.081 11:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.081 11:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59041 /var/tmp/spdk.sock 00:06:02.081 11:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59041 ']' 00:06:02.081 11:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.081 11:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:02.081 11:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.081 11:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.081 11:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.081 [2024-11-19 11:58:05.373755] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:02.081 [2024-11-19 11:58:05.373901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59041 ] 00:06:02.341 [2024-11-19 11:58:05.534914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.341 [2024-11-19 11:58:05.658617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.281 11:58:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.281 11:58:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.281 11:58:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59061 00:06:03.281 11:58:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:03.281 11:58:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59061 /var/tmp/spdk2.sock 00:06:03.281 11:58:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59061 ']' 00:06:03.281 11:58:06 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.281 11:58:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.281 11:58:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.281 11:58:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.281 11:58:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.540 [2024-11-19 11:58:06.672000] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:03.540 [2024-11-19 11:58:06.672154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59061 ] 00:06:03.540 [2024-11-19 11:58:06.849201] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:03.540 [2024-11-19 11:58:06.849272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.800 [2024-11-19 11:58:07.094553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59041 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59041 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59041 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59041 ']' 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59041 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59041 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.396 killing process with pid 59041 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59041' 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59041 00:06:06.396 11:58:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59041 00:06:11.678 11:58:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59061 00:06:11.678 11:58:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59061 ']' 00:06:11.678 11:58:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59061 00:06:11.678 11:58:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.678 11:58:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.678 11:58:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59061 00:06:11.678 11:58:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.678 11:58:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.678 killing process with pid 59061 00:06:11.678 11:58:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59061' 00:06:11.678 11:58:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59061 00:06:11.678 11:58:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59061 00:06:13.588 00:06:13.588 real 0m11.533s 00:06:13.588 user 0m11.798s 00:06:13.588 sys 0m1.244s 00:06:13.588 11:58:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:13.588 11:58:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.588 ************************************ 00:06:13.588 END TEST non_locking_app_on_locked_coremask 00:06:13.588 ************************************ 00:06:13.589 11:58:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:13.589 11:58:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.589 11:58:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.589 11:58:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.589 ************************************ 00:06:13.589 START TEST locking_app_on_unlocked_coremask 00:06:13.589 ************************************ 00:06:13.589 11:58:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:13.589 11:58:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59205 00:06:13.589 11:58:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:13.589 11:58:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59205 /var/tmp/spdk.sock 00:06:13.589 11:58:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59205 ']' 00:06:13.589 11:58:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.589 11:58:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.589 11:58:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.589 11:58:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.589 11:58:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.849 [2024-11-19 11:58:16.969821] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:13.849 [2024-11-19 11:58:16.969957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59205 ] 00:06:13.849 [2024-11-19 11:58:17.127458] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:13.849 [2024-11-19 11:58:17.127507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.109 [2024-11-19 11:58:17.243161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.050 11:58:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.050 11:58:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.050 11:58:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59223 00:06:15.050 11:58:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59223 /var/tmp/spdk2.sock 00:06:15.050 11:58:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.050 11:58:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59223 ']' 
00:06:15.050 11:58:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.050 11:58:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.050 11:58:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.050 11:58:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.050 11:58:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.050 [2024-11-19 11:58:18.195382] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:15.050 [2024-11-19 11:58:18.195552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59223 ] 00:06:15.050 [2024-11-19 11:58:18.371614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.309 [2024-11-19 11:58:18.606124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.849 11:58:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.849 11:58:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:17.849 11:58:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59223 00:06:17.849 11:58:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59223 00:06:17.849 11:58:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.109 11:58:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59205 00:06:18.109 11:58:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59205 ']' 00:06:18.109 11:58:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59205 00:06:18.109 11:58:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.109 11:58:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.109 11:58:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59205 00:06:18.109 11:58:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.109 11:58:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.109 killing process with pid 59205 00:06:18.109 11:58:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59205' 00:06:18.109 11:58:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59205 00:06:18.109 11:58:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59205 00:06:23.391 11:58:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59223 00:06:23.391 11:58:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59223 ']' 00:06:23.391 11:58:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59223 00:06:23.391 11:58:26 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:06:23.391 11:58:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.391 11:58:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59223 00:06:23.391 killing process with pid 59223 00:06:23.391 11:58:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.391 11:58:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.391 11:58:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59223' 00:06:23.391 11:58:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59223 00:06:23.391 11:58:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59223 00:06:25.930 00:06:25.930 real 0m12.179s 00:06:25.930 user 0m12.379s 00:06:25.930 sys 0m1.301s 00:06:25.930 11:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.930 11:58:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.930 ************************************ 00:06:25.930 END TEST locking_app_on_unlocked_coremask 00:06:25.930 ************************************ 00:06:25.930 11:58:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:25.930 11:58:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.930 11:58:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.930 11:58:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.930 ************************************ 00:06:25.930 START TEST 
locking_app_on_locked_coremask 00:06:25.930 ************************************ 00:06:25.930 11:58:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:25.930 11:58:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59386 00:06:25.930 11:58:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.930 11:58:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59386 /var/tmp/spdk.sock 00:06:25.930 11:58:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59386 ']' 00:06:25.930 11:58:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.930 11:58:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.930 11:58:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.930 11:58:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.930 11:58:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.930 [2024-11-19 11:58:29.225429] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:06:25.930 [2024-11-19 11:58:29.225573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59386 ] 00:06:26.189 [2024-11-19 11:58:29.405026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.189 [2024-11-19 11:58:29.549178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59408 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59408 /var/tmp/spdk2.sock 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59408 /var/tmp/spdk2.sock 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59408 /var/tmp/spdk2.sock 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59408 ']' 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.571 11:58:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.571 [2024-11-19 11:58:30.612715] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:27.571 [2024-11-19 11:58:30.612866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59408 ] 00:06:27.571 [2024-11-19 11:58:30.784939] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59386 has claimed it. 00:06:27.571 [2024-11-19 11:58:30.785044] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:27.868 ERROR: process (pid: 59408) is no longer running 00:06:27.868 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59408) - No such process 00:06:27.868 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.868 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:27.868 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:27.868 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.868 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.868 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.868 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59386 00:06:27.868 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59386 00:06:27.869 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.450 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59386 00:06:28.450 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59386 ']' 00:06:28.450 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59386 00:06:28.450 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:28.450 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.450 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59386 00:06:28.450 
11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.450 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.450 killing process with pid 59386 00:06:28.450 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59386' 00:06:28.450 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59386 00:06:28.450 11:58:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59386 00:06:30.988 00:06:30.988 real 0m5.146s 00:06:30.988 user 0m5.124s 00:06:30.988 sys 0m0.962s 00:06:30.988 11:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.988 11:58:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.988 ************************************ 00:06:30.988 END TEST locking_app_on_locked_coremask 00:06:30.988 ************************************ 00:06:30.988 11:58:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:30.988 11:58:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.988 11:58:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.988 11:58:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.988 ************************************ 00:06:30.988 START TEST locking_overlapped_coremask 00:06:30.988 ************************************ 00:06:30.988 11:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:30.988 11:58:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59477 00:06:30.988 11:58:34 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:30.988 11:58:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59477 /var/tmp/spdk.sock 00:06:30.988 11:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59477 ']' 00:06:30.988 11:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.988 11:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.988 11:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.988 11:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.988 11:58:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.248 [2024-11-19 11:58:34.411668] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
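The two targets above are launched with core masks `-m 0x7` and (later) `-m 0x1c`. As a minimal sketch, not part of the test harness, the hypothetical helper below decodes such a hex mask into the core numbers the reactors run on, which shows why the masks overlap on core 2:

```shell
# Hypothetical helper (not from cpu_locks.sh): decode an SPDK-style hex core
# mask into the list of core numbers whose bits are set.
decode_mask() {
  local mask=$(( $1 ))   # force arithmetic evaluation of e.g. 0x7
  local core=0
  local cores=()
  while (( mask )); do
    if (( mask & 1 )); then
      cores+=("$core")   # bit set -> this core is in the mask
    fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "${cores[*]}"
}

decode_mask 0x7    # -> 0 1 2  (the first spdk_tgt above)
decode_mask 0x1c   # -> 2 3 4  (the second spdk_tgt; core 2 overlaps)
```

So `0x7` and `0x1c` share core 2, which is the collision the locking test deliberately provokes below.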
00:06:31.248 [2024-11-19 11:58:34.411788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59477 ] 00:06:31.248 [2024-11-19 11:58:34.566607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.507 [2024-11-19 11:58:34.710495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.507 [2024-11-19 11:58:34.710592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.507 [2024-11-19 11:58:34.710680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59501 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59501 /var/tmp/spdk2.sock 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59501 /var/tmp/spdk2.sock 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59501 /var/tmp/spdk2.sock 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59501 ']' 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.446 11:58:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.706 [2024-11-19 11:58:35.842302] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:32.706 [2024-11-19 11:58:35.842675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59501 ] 00:06:32.706 [2024-11-19 11:58:36.036791] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59477 has claimed it. 00:06:32.706 [2024-11-19 11:58:36.036860] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
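The `check_remaining_locks` step that follows compares the lock files actually present under `/var/tmp` against a brace-expanded expected set. A self-contained sketch of that comparison, using a temporary directory instead of `/var/tmp` so it does not touch a live system:

```shell
# Simplified re-creation of the check_remaining_locks pattern seen in
# cpu_locks.sh@36-38; the temp dir stands in for /var/tmp (an assumption
# made so the sketch is runnable without a real spdk_tgt).
lockdir=$(mktemp -d)
touch "$lockdir"/spdk_cpu_lock_000 "$lockdir"/spdk_cpu_lock_001 "$lockdir"/spdk_cpu_lock_002

locks=("$lockdir"/spdk_cpu_lock_*)               # what is actually on disk
locks_expected=("$lockdir"/spdk_cpu_lock_{000..002})  # what a 3-core mask should leave

if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
  echo "locks intact"
fi
rm -rf "$lockdir"
```

Because the glob sorts lexically and the brace expansion is zero-padded, a missing or extra `spdk_cpu_lock_NNN` file makes the two joined strings differ, which is exactly the condition the `[[ ... == ... ]]` trace above is checking.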
00:06:33.315 ERROR: process (pid: 59501) is no longer running 00:06:33.315 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59501) - No such process 00:06:33.315 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.315 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:33.315 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:33.315 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.315 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.315 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.315 11:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:33.315 11:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:33.315 11:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:33.316 11:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:33.316 11:58:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59477 00:06:33.316 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59477 ']' 00:06:33.316 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59477 00:06:33.316 11:58:36 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:33.316 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.316 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59477 00:06:33.316 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.316 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.316 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59477' 00:06:33.316 killing process with pid 59477 00:06:33.316 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59477 00:06:33.316 11:58:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59477 00:06:35.855 00:06:35.855 real 0m4.817s 00:06:35.855 user 0m12.944s 00:06:35.855 sys 0m0.788s 00:06:35.855 11:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.855 11:58:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.855 ************************************ 00:06:35.855 END TEST locking_overlapped_coremask 00:06:35.855 ************************************ 00:06:35.855 11:58:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:35.855 11:58:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.855 11:58:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.855 11:58:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.855 ************************************ 00:06:35.855 START TEST 
locking_overlapped_coremask_via_rpc 00:06:35.855 ************************************ 00:06:35.855 11:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:35.855 11:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59565 00:06:35.855 11:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:35.855 11:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59565 /var/tmp/spdk.sock 00:06:35.855 11:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59565 ']' 00:06:35.855 11:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.855 11:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.855 11:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.855 11:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.855 11:58:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.115 [2024-11-19 11:58:39.300770] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:06:36.115 [2024-11-19 11:58:39.300891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59565 ] 00:06:36.115 [2024-11-19 11:58:39.481069] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:36.115 [2024-11-19 11:58:39.481124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.374 [2024-11-19 11:58:39.624088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.374 [2024-11-19 11:58:39.624174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.374 [2024-11-19 11:58:39.624130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.311 11:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.311 11:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:37.311 11:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59588 00:06:37.311 11:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:37.311 11:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59588 /var/tmp/spdk2.sock 00:06:37.311 11:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59588 ']' 00:06:37.311 11:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.311 11:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.311 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.311 11:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.311 11:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.311 11:58:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.570 [2024-11-19 11:58:40.740609] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:37.570 [2024-11-19 11:58:40.740725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59588 ] 00:06:37.570 [2024-11-19 11:58:40.909608] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.570 [2024-11-19 11:58:40.909664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.830 [2024-11-19 11:58:41.195774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.830 [2024-11-19 11:58:41.199246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.830 [2024-11-19 11:58:41.199285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.399 11:58:43 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.399 [2024-11-19 11:58:43.284202] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59565 has claimed it. 00:06:40.399 request: 00:06:40.399 { 00:06:40.399 "method": "framework_enable_cpumask_locks", 00:06:40.399 "req_id": 1 00:06:40.399 } 00:06:40.399 Got JSON-RPC error response 00:06:40.399 response: 00:06:40.399 { 00:06:40.399 "code": -32603, 00:06:40.399 "message": "Failed to claim CPU core: 2" 00:06:40.399 } 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59565 /var/tmp/spdk.sock 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59565 ']' 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59588 /var/tmp/spdk2.sock 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59588 ']' 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
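The JSON-RPC exchange logged just above (request `framework_enable_cpumask_locks`, error response code `-32603`) normally travels over the Unix socket `/var/tmp/spdk2.sock`. As a standalone illustration only, with no running spdk_tgt, the sketch below builds the same request/response strings and extracts the error code the way a caller would classify the failure:

```shell
# Hypothetical offline illustration of the JSON-RPC strings from the log
# above; rpc_cmd would send the request over /var/tmp/spdk2.sock instead.
request='{"method": "framework_enable_cpumask_locks", "req_id": 1}'
response='{"code": -32603, "message": "Failed to claim CPU core: 2"}'

# Pull the numeric error code out of the response.
code=$(printf '%s\n' "$response" | sed 's/.*"code": \(-[0-9]*\).*/\1/')
echo "$code"   # -32603 is the JSON-RPC "internal error" code
```

The test expects exactly this shape: the second target cannot claim core 2 while PID 59565 holds its lock, so the RPC fails with `-32603` rather than succeeding.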
00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.399 00:06:40.399 real 0m4.514s 00:06:40.399 user 0m1.243s 00:06:40.399 sys 0m0.184s 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.399 11:58:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.399 ************************************ 00:06:40.399 END TEST locking_overlapped_coremask_via_rpc 00:06:40.399 ************************************ 00:06:40.399 11:58:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:40.399 11:58:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59565 ]] 00:06:40.399 11:58:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59565 00:06:40.399 11:58:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59565 ']' 00:06:40.399 11:58:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59565 00:06:40.399 11:58:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:40.399 11:58:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.399 11:58:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59565 00:06:40.658 11:58:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.658 11:58:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.658 killing process with pid 59565 00:06:40.658 11:58:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59565' 00:06:40.658 11:58:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59565 00:06:40.658 11:58:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59565 00:06:43.188 11:58:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59588 ]] 00:06:43.188 11:58:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59588 00:06:43.188 11:58:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59588 ']' 00:06:43.188 11:58:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59588 00:06:43.188 11:58:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:43.188 11:58:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.188 11:58:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59588 00:06:43.188 11:58:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:43.188 11:58:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:43.188 11:58:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59588' 00:06:43.188 killing 
process with pid 59588 00:06:43.188 11:58:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59588 00:06:43.188 11:58:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59588 00:06:46.474 11:58:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:46.474 11:58:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:46.474 11:58:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59565 ]] 00:06:46.474 11:58:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59565 00:06:46.474 11:58:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59565 ']' 00:06:46.474 11:58:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59565 00:06:46.474 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59565) - No such process 00:06:46.474 Process with pid 59565 is not found 00:06:46.474 11:58:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59565 is not found' 00:06:46.474 11:58:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59588 ]] 00:06:46.474 11:58:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59588 00:06:46.474 11:58:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59588 ']' 00:06:46.474 11:58:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59588 00:06:46.474 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59588) - No such process 00:06:46.474 Process with pid 59588 is not found 00:06:46.474 11:58:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59588 is not found' 00:06:46.474 11:58:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:46.474 00:06:46.474 real 0m52.616s 00:06:46.474 user 1m29.540s 00:06:46.474 sys 0m7.309s 00:06:46.474 11:58:49 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.474 11:58:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.474 
************************************ 00:06:46.474 END TEST cpu_locks 00:06:46.474 ************************************ 00:06:46.474 00:06:46.474 real 1m24.830s 00:06:46.474 user 2m33.945s 00:06:46.474 sys 0m11.142s 00:06:46.474 11:58:49 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.474 11:58:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.474 ************************************ 00:06:46.474 END TEST event 00:06:46.474 ************************************ 00:06:46.474 11:58:49 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:46.474 11:58:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.474 11:58:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.474 11:58:49 -- common/autotest_common.sh@10 -- # set +x 00:06:46.474 ************************************ 00:06:46.474 START TEST thread 00:06:46.474 ************************************ 00:06:46.474 11:58:49 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:46.474 * Looking for test storage... 
00:06:46.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:46.474 11:58:49 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.474 11:58:49 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.474 11:58:49 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.474 11:58:49 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.474 11:58:49 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.474 11:58:49 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.474 11:58:49 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.474 11:58:49 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.474 11:58:49 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.474 11:58:49 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.474 11:58:49 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.474 11:58:49 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.474 11:58:49 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.474 11:58:49 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.474 11:58:49 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.474 11:58:49 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:46.474 11:58:49 thread -- scripts/common.sh@345 -- # : 1 00:06:46.474 11:58:49 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.474 11:58:49 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.474 11:58:49 thread -- scripts/common.sh@365 -- # decimal 1 00:06:46.474 11:58:49 thread -- scripts/common.sh@353 -- # local d=1 00:06:46.474 11:58:49 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.474 11:58:49 thread -- scripts/common.sh@355 -- # echo 1 00:06:46.474 11:58:49 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.474 11:58:49 thread -- scripts/common.sh@366 -- # decimal 2 00:06:46.474 11:58:49 thread -- scripts/common.sh@353 -- # local d=2 00:06:46.474 11:58:49 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.474 11:58:49 thread -- scripts/common.sh@355 -- # echo 2 00:06:46.474 11:58:49 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.474 11:58:49 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.474 11:58:49 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.474 11:58:49 thread -- scripts/common.sh@368 -- # return 0 00:06:46.474 11:58:49 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.474 11:58:49 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.474 --rc genhtml_branch_coverage=1 00:06:46.474 --rc genhtml_function_coverage=1 00:06:46.474 --rc genhtml_legend=1 00:06:46.474 --rc geninfo_all_blocks=1 00:06:46.474 --rc geninfo_unexecuted_blocks=1 00:06:46.474 00:06:46.474 ' 00:06:46.474 11:58:49 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.474 --rc genhtml_branch_coverage=1 00:06:46.474 --rc genhtml_function_coverage=1 00:06:46.474 --rc genhtml_legend=1 00:06:46.474 --rc geninfo_all_blocks=1 00:06:46.474 --rc geninfo_unexecuted_blocks=1 00:06:46.474 00:06:46.474 ' 00:06:46.474 11:58:49 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.474 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.474 --rc genhtml_branch_coverage=1 00:06:46.474 --rc genhtml_function_coverage=1 00:06:46.474 --rc genhtml_legend=1 00:06:46.474 --rc geninfo_all_blocks=1 00:06:46.474 --rc geninfo_unexecuted_blocks=1 00:06:46.474 00:06:46.474 ' 00:06:46.474 11:58:49 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.474 --rc genhtml_branch_coverage=1 00:06:46.474 --rc genhtml_function_coverage=1 00:06:46.474 --rc genhtml_legend=1 00:06:46.474 --rc geninfo_all_blocks=1 00:06:46.474 --rc geninfo_unexecuted_blocks=1 00:06:46.474 00:06:46.474 ' 00:06:46.474 11:58:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:46.474 11:58:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:46.474 11:58:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.474 11:58:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.474 ************************************ 00:06:46.474 START TEST thread_poller_perf 00:06:46.475 ************************************ 00:06:46.475 11:58:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:46.475 [2024-11-19 11:58:49.617925] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
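The `cmp_versions 1.15 '<' 2` trace above splits each dotted version on `.` and compares field by field. A compact sketch of that idea, simplified from the `scripts/common.sh` logic being traced (the helper name `version_lt` is invented for this example):

```shell
# Simplified field-by-field dotted-version comparison, mirroring the
# cmp_versions trace above; version_lt is a hypothetical name.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)   # "1.15" -> (1 15), "2" -> (2)
  local i
  for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why lcov 1.15 takes the legacy `--rc lcov_*` option spelling selected in the `LCOV_OPTS` export above.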
00:06:46.475 [2024-11-19 11:58:49.618473] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59789 ] 00:06:46.475 [2024-11-19 11:58:49.794535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.735 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:46.735 [2024-11-19 11:58:49.933972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.116 [2024-11-19T11:58:51.493Z] ====================================== 00:06:48.116 [2024-11-19T11:58:51.493Z] busy:2299056206 (cyc) 00:06:48.116 [2024-11-19T11:58:51.493Z] total_run_count: 396000 00:06:48.116 [2024-11-19T11:58:51.493Z] tsc_hz: 2290000000 (cyc) 00:06:48.116 [2024-11-19T11:58:51.493Z] ====================================== 00:06:48.116 [2024-11-19T11:58:51.493Z] poller_cost: 5805 (cyc), 2534 (nsec) 00:06:48.116 00:06:48.116 real 0m1.623s 00:06:48.116 user 0m1.396s 00:06:48.116 sys 0m0.118s 00:06:48.116 11:58:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.116 11:58:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.116 ************************************ 00:06:48.116 END TEST thread_poller_perf 00:06:48.116 ************************************ 00:06:48.116 11:58:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.116 11:58:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:48.116 11:58:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.116 11:58:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.116 ************************************ 00:06:48.116 START TEST thread_poller_perf 00:06:48.116 
************************************ 00:06:48.116 11:58:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.116 [2024-11-19 11:58:51.315039] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:48.116 [2024-11-19 11:58:51.315181] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59831 ] 00:06:48.376 [2024-11-19 11:58:51.496764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.376 [2024-11-19 11:58:51.645650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.376 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:49.765 [2024-11-19T11:58:53.142Z] ====================================== 00:06:49.765 [2024-11-19T11:58:53.142Z] busy:2294181846 (cyc) 00:06:49.765 [2024-11-19T11:58:53.142Z] total_run_count: 4954000 00:06:49.765 [2024-11-19T11:58:53.142Z] tsc_hz: 2290000000 (cyc) 00:06:49.765 [2024-11-19T11:58:53.142Z] ====================================== 00:06:49.765 [2024-11-19T11:58:53.142Z] poller_cost: 463 (cyc), 202 (nsec) 00:06:49.765 00:06:49.765 real 0m1.603s 00:06:49.765 user 0m1.374s 00:06:49.765 sys 0m0.120s 00:06:49.765 11:58:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.765 11:58:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:49.765 ************************************ 00:06:49.765 END TEST thread_poller_perf 00:06:49.765 ************************************ 00:06:49.765 11:58:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:49.765 00:06:49.765 real 0m3.588s 00:06:49.765 user 0m2.931s 00:06:49.765 sys 0m0.459s 00:06:49.765 11:58:52 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.765 11:58:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.765 ************************************ 00:06:49.765 END TEST thread 00:06:49.765 ************************************ 00:06:49.765 11:58:52 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:49.765 11:58:52 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:49.765 11:58:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.765 11:58:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.765 11:58:52 -- common/autotest_common.sh@10 -- # set +x 00:06:49.765 ************************************ 00:06:49.765 START TEST app_cmdline 00:06:49.765 ************************************ 00:06:49.765 11:58:52 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:49.765 * Looking for test storage... 00:06:49.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:49.765 11:58:53 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.765 11:58:53 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.765 11:58:53 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.025 11:58:53 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.025 11:58:53 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:50.025 11:58:53 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.025 11:58:53 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.025 --rc genhtml_branch_coverage=1 00:06:50.025 --rc genhtml_function_coverage=1 00:06:50.025 --rc 
genhtml_legend=1 00:06:50.025 --rc geninfo_all_blocks=1 00:06:50.025 --rc geninfo_unexecuted_blocks=1 00:06:50.025 00:06:50.025 ' 00:06:50.025 11:58:53 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.025 --rc genhtml_branch_coverage=1 00:06:50.025 --rc genhtml_function_coverage=1 00:06:50.025 --rc genhtml_legend=1 00:06:50.025 --rc geninfo_all_blocks=1 00:06:50.025 --rc geninfo_unexecuted_blocks=1 00:06:50.025 00:06:50.025 ' 00:06:50.025 11:58:53 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.025 --rc genhtml_branch_coverage=1 00:06:50.025 --rc genhtml_function_coverage=1 00:06:50.025 --rc genhtml_legend=1 00:06:50.026 --rc geninfo_all_blocks=1 00:06:50.026 --rc geninfo_unexecuted_blocks=1 00:06:50.026 00:06:50.026 ' 00:06:50.026 11:58:53 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.026 --rc genhtml_branch_coverage=1 00:06:50.026 --rc genhtml_function_coverage=1 00:06:50.026 --rc genhtml_legend=1 00:06:50.026 --rc geninfo_all_blocks=1 00:06:50.026 --rc geninfo_unexecuted_blocks=1 00:06:50.026 00:06:50.026 ' 00:06:50.026 11:58:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:50.026 11:58:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59915 00:06:50.026 11:58:53 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:50.026 11:58:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59915 00:06:50.026 11:58:53 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59915 ']' 00:06:50.026 11:58:53 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.026 11:58:53 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:50.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.026 11:58:53 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.026 11:58:53 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.026 11:58:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:50.026 [2024-11-19 11:58:53.312293] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:50.026 [2024-11-19 11:58:53.312427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59915 ] 00:06:50.285 [2024-11-19 11:58:53.490225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.285 [2024-11-19 11:58:53.608359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.226 11:58:54 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.226 11:58:54 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:51.226 11:58:54 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:51.486 { 00:06:51.486 "version": "SPDK v25.01-pre git sha1 dcc2ca8f3", 00:06:51.486 "fields": { 00:06:51.486 "major": 25, 00:06:51.486 "minor": 1, 00:06:51.486 "patch": 0, 00:06:51.486 "suffix": "-pre", 00:06:51.486 "commit": "dcc2ca8f3" 00:06:51.486 } 00:06:51.486 } 00:06:51.486 11:58:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:51.486 11:58:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:51.486 11:58:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:51.486 11:58:54 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:51.486 11:58:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:51.486 11:58:54 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.486 11:58:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:51.486 11:58:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:51.486 11:58:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:51.486 11:58:54 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.486 11:58:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:51.486 11:58:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:51.486 11:58:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.486 11:58:54 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:51.486 11:58:54 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.486 11:58:54 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.486 11:58:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.486 11:58:54 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.486 11:58:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.486 11:58:54 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.486 11:58:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.486 11:58:54 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.486 11:58:54 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:51.486 11:58:54 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.746 request: 00:06:51.746 { 00:06:51.746 "method": "env_dpdk_get_mem_stats", 00:06:51.746 "req_id": 1 00:06:51.746 } 00:06:51.746 Got JSON-RPC error response 00:06:51.746 response: 00:06:51.746 { 00:06:51.746 "code": -32601, 00:06:51.746 "message": "Method not found" 00:06:51.746 } 00:06:51.746 11:58:54 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:51.746 11:58:54 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.746 11:58:54 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.747 11:58:54 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.747 11:58:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59915 00:06:51.747 11:58:54 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59915 ']' 00:06:51.747 11:58:54 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59915 00:06:51.747 11:58:54 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:51.747 11:58:54 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.747 11:58:54 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59915 00:06:51.747 11:58:55 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.747 11:58:55 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.747 killing process with pid 59915 00:06:51.747 11:58:55 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59915' 00:06:51.747 11:58:55 app_cmdline -- common/autotest_common.sh@973 -- # kill 59915 00:06:51.747 11:58:55 app_cmdline -- common/autotest_common.sh@978 -- # wait 59915 00:06:54.296 00:06:54.296 real 0m4.358s 00:06:54.296 user 0m4.554s 00:06:54.296 sys 0m0.635s 00:06:54.296 11:58:57 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.296 11:58:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:54.296 ************************************ 00:06:54.296 END TEST app_cmdline 00:06:54.296 ************************************ 00:06:54.296 11:58:57 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:54.296 11:58:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.296 11:58:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.296 11:58:57 -- common/autotest_common.sh@10 -- # set +x 00:06:54.296 ************************************ 00:06:54.296 START TEST version 00:06:54.296 ************************************ 00:06:54.296 11:58:57 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:54.296 * Looking for test storage... 00:06:54.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:54.296 11:58:57 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.296 11:58:57 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.296 11:58:57 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.296 11:58:57 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.296 11:58:57 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.296 11:58:57 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.296 11:58:57 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.296 11:58:57 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.296 11:58:57 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.296 11:58:57 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.296 11:58:57 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.296 11:58:57 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.296 11:58:57 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.296 11:58:57 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:54.296 11:58:57 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.296 11:58:57 version -- scripts/common.sh@344 -- # case "$op" in 00:06:54.296 11:58:57 version -- scripts/common.sh@345 -- # : 1 00:06:54.296 11:58:57 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.296 11:58:57 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.296 11:58:57 version -- scripts/common.sh@365 -- # decimal 1 00:06:54.296 11:58:57 version -- scripts/common.sh@353 -- # local d=1 00:06:54.296 11:58:57 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.296 11:58:57 version -- scripts/common.sh@355 -- # echo 1 00:06:54.296 11:58:57 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.296 11:58:57 version -- scripts/common.sh@366 -- # decimal 2 00:06:54.296 11:58:57 version -- scripts/common.sh@353 -- # local d=2 00:06:54.296 11:58:57 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.296 11:58:57 version -- scripts/common.sh@355 -- # echo 2 00:06:54.296 11:58:57 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.296 11:58:57 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.296 11:58:57 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.296 11:58:57 version -- scripts/common.sh@368 -- # return 0 00:06:54.296 11:58:57 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.296 11:58:57 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.296 --rc genhtml_branch_coverage=1 00:06:54.296 --rc genhtml_function_coverage=1 00:06:54.296 --rc genhtml_legend=1 00:06:54.296 --rc geninfo_all_blocks=1 00:06:54.296 --rc geninfo_unexecuted_blocks=1 00:06:54.296 00:06:54.296 ' 00:06:54.296 11:58:57 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:06:54.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.296 --rc genhtml_branch_coverage=1 00:06:54.296 --rc genhtml_function_coverage=1 00:06:54.296 --rc genhtml_legend=1 00:06:54.296 --rc geninfo_all_blocks=1 00:06:54.296 --rc geninfo_unexecuted_blocks=1 00:06:54.296 00:06:54.296 ' 00:06:54.296 11:58:57 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.296 --rc genhtml_branch_coverage=1 00:06:54.296 --rc genhtml_function_coverage=1 00:06:54.296 --rc genhtml_legend=1 00:06:54.296 --rc geninfo_all_blocks=1 00:06:54.296 --rc geninfo_unexecuted_blocks=1 00:06:54.296 00:06:54.296 ' 00:06:54.296 11:58:57 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.296 --rc genhtml_branch_coverage=1 00:06:54.296 --rc genhtml_function_coverage=1 00:06:54.296 --rc genhtml_legend=1 00:06:54.296 --rc geninfo_all_blocks=1 00:06:54.296 --rc geninfo_unexecuted_blocks=1 00:06:54.296 00:06:54.296 ' 00:06:54.296 11:58:57 version -- app/version.sh@17 -- # get_header_version major 00:06:54.296 11:58:57 version -- app/version.sh@14 -- # cut -f2 00:06:54.296 11:58:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:54.296 11:58:57 version -- app/version.sh@14 -- # tr -d '"' 00:06:54.296 11:58:57 version -- app/version.sh@17 -- # major=25 00:06:54.296 11:58:57 version -- app/version.sh@18 -- # get_header_version minor 00:06:54.296 11:58:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:54.296 11:58:57 version -- app/version.sh@14 -- # cut -f2 00:06:54.296 11:58:57 version -- app/version.sh@14 -- # tr -d '"' 00:06:54.296 11:58:57 version -- app/version.sh@18 -- # minor=1 00:06:54.296 11:58:57 
version -- app/version.sh@19 -- # get_header_version patch 00:06:54.296 11:58:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:54.555 11:58:57 version -- app/version.sh@14 -- # cut -f2 00:06:54.555 11:58:57 version -- app/version.sh@14 -- # tr -d '"' 00:06:54.555 11:58:57 version -- app/version.sh@19 -- # patch=0 00:06:54.555 11:58:57 version -- app/version.sh@20 -- # get_header_version suffix 00:06:54.555 11:58:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:54.555 11:58:57 version -- app/version.sh@14 -- # cut -f2 00:06:54.555 11:58:57 version -- app/version.sh@14 -- # tr -d '"' 00:06:54.555 11:58:57 version -- app/version.sh@20 -- # suffix=-pre 00:06:54.555 11:58:57 version -- app/version.sh@22 -- # version=25.1 00:06:54.555 11:58:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:54.555 11:58:57 version -- app/version.sh@28 -- # version=25.1rc0 00:06:54.555 11:58:57 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:54.555 11:58:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:54.555 11:58:57 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:54.555 11:58:57 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:54.555 00:06:54.555 real 0m0.319s 00:06:54.555 user 0m0.185s 00:06:54.555 sys 0m0.189s 00:06:54.555 11:58:57 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.555 11:58:57 version -- common/autotest_common.sh@10 -- # set +x 00:06:54.555 ************************************ 00:06:54.555 END TEST version 00:06:54.555 ************************************ 00:06:54.555 
11:58:57 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:54.555 11:58:57 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:54.555 11:58:57 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:54.555 11:58:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.555 11:58:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.555 11:58:57 -- common/autotest_common.sh@10 -- # set +x 00:06:54.555 ************************************ 00:06:54.555 START TEST bdev_raid 00:06:54.555 ************************************ 00:06:54.555 11:58:57 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:54.555 * Looking for test storage... 00:06:54.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:54.555 11:58:57 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.555 11:58:57 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.555 11:58:57 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.814 11:58:57 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.814 11:58:57 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.815 11:58:57 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:54.815 11:58:58 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:54.815 11:58:58 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.815 11:58:58 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:54.815 11:58:58 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.815 11:58:58 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:54.815 11:58:58 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:54.815 11:58:58 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.815 11:58:58 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:54.815 11:58:58 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.815 11:58:58 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.815 11:58:58 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.815 11:58:58 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:54.815 11:58:58 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.815 11:58:58 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.815 --rc genhtml_branch_coverage=1 00:06:54.815 --rc genhtml_function_coverage=1 00:06:54.815 --rc genhtml_legend=1 00:06:54.815 --rc geninfo_all_blocks=1 00:06:54.815 --rc geninfo_unexecuted_blocks=1 00:06:54.815 00:06:54.815 ' 00:06:54.815 11:58:58 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.815 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:54.815 --rc genhtml_branch_coverage=1 00:06:54.815 --rc genhtml_function_coverage=1 00:06:54.815 --rc genhtml_legend=1 00:06:54.815 --rc geninfo_all_blocks=1 00:06:54.815 --rc geninfo_unexecuted_blocks=1 00:06:54.815 00:06:54.815 ' 00:06:54.815 11:58:58 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.815 --rc genhtml_branch_coverage=1 00:06:54.815 --rc genhtml_function_coverage=1 00:06:54.815 --rc genhtml_legend=1 00:06:54.815 --rc geninfo_all_blocks=1 00:06:54.815 --rc geninfo_unexecuted_blocks=1 00:06:54.815 00:06:54.815 ' 00:06:54.815 11:58:58 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.815 --rc genhtml_branch_coverage=1 00:06:54.815 --rc genhtml_function_coverage=1 00:06:54.815 --rc genhtml_legend=1 00:06:54.815 --rc geninfo_all_blocks=1 00:06:54.815 --rc geninfo_unexecuted_blocks=1 00:06:54.815 00:06:54.815 ' 00:06:54.815 11:58:58 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:54.815 11:58:58 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:54.815 11:58:58 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:54.815 11:58:58 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:54.815 11:58:58 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:54.815 11:58:58 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:54.815 11:58:58 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:54.815 11:58:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.815 11:58:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.815 11:58:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.815 ************************************ 
00:06:54.815 START TEST raid1_resize_data_offset_test 00:06:54.815 ************************************ 00:06:54.815 11:58:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:54.815 11:58:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60102 00:06:54.815 Process raid pid: 60102 00:06:54.815 11:58:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:54.815 11:58:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60102' 00:06:54.815 11:58:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60102 00:06:54.815 11:58:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60102 ']' 00:06:54.815 11:58:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.815 11:58:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.815 11:58:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.815 11:58:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.815 11:58:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.815 [2024-11-19 11:58:58.132885] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:06:54.815 [2024-11-19 11:58:58.133025] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.074 [2024-11-19 11:58:58.310309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.075 [2024-11-19 11:58:58.422550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.335 [2024-11-19 11:58:58.620036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.335 [2024-11-19 11:58:58.620076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.594 11:58:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.594 11:58:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:55.594 11:58:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:55.594 11:58:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.594 11:58:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.854 malloc0 00:06:55.854 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.854 11:58:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:55.854 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.854 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.854 malloc1 00:06:55.854 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.854 11:58:59 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:55.854 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.854 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.854 null0 00:06:55.854 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.854 11:58:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:55.854 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.854 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.854 [2024-11-19 11:58:59.120714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:55.854 [2024-11-19 11:58:59.122475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:55.854 [2024-11-19 11:58:59.122522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:55.855 [2024-11-19 11:58:59.122661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:55.855 [2024-11-19 11:58:59.122674] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:55.855 [2024-11-19 11:58:59.122954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:55.855 [2024-11-19 11:58:59.123162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:55.855 [2024-11-19 11:58:59.123184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:55.855 [2024-11-19 11:58:59.123346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:55.855 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.855 11:58:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.855 11:58:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:55.855 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.855 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.855 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.855 11:58:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:55.855 11:58:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:55.855 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.855 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.855 [2024-11-19 11:58:59.180658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:55.855 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.855 11:58:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:55.855 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.855 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.423 malloc2 00:06:56.423 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.423 11:58:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:56.423 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.423 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.423 [2024-11-19 11:58:59.717062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:56.423 [2024-11-19 11:58:59.734134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:56.423 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.423 [2024-11-19 11:58:59.736128] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:56.423 11:58:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.423 11:58:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:56.424 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.424 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.424 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.424 11:58:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:56.424 11:58:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60102 00:06:56.424 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60102 ']' 00:06:56.424 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60102 00:06:56.424 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:56.424 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:56.683 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60102 00:06:56.683 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.683 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.683 killing process with pid 60102 00:06:56.683 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60102' 00:06:56.683 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60102 00:06:56.683 11:58:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60102 00:06:56.683 [2024-11-19 11:58:59.837039] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:56.683 [2024-11-19 11:58:59.838098] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:56.683 [2024-11-19 11:58:59.838160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.683 [2024-11-19 11:58:59.838177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:56.683 [2024-11-19 11:58:59.873440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.683 [2024-11-19 11:58:59.873793] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:56.683 [2024-11-19 11:58:59.873815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:58.591 [2024-11-19 11:59:01.630867] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.529 11:59:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:59.529 00:06:59.529 real 0m4.691s 00:06:59.529 user 0m4.591s 00:06:59.529 sys 0m0.537s 00:06:59.529 11:59:02 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.529 11:59:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.529 ************************************ 00:06:59.529 END TEST raid1_resize_data_offset_test 00:06:59.529 ************************************ 00:06:59.529 11:59:02 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:59.529 11:59:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.529 11:59:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.529 11:59:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.529 ************************************ 00:06:59.529 START TEST raid0_resize_superblock_test 00:06:59.529 ************************************ 00:06:59.529 11:59:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:59.529 11:59:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:59.529 11:59:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60186 00:06:59.529 Process raid pid: 60186 00:06:59.529 11:59:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:59.529 11:59:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60186' 00:06:59.529 11:59:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60186 00:06:59.529 11:59:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60186 ']' 00:06:59.529 11:59:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.529 11:59:02 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.529 11:59:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.529 11:59:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.529 11:59:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.529 [2024-11-19 11:59:02.892837] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:59.529 [2024-11-19 11:59:02.892972] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.822 [2024-11-19 11:59:03.074988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.084 [2024-11-19 11:59:03.189778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.084 [2024-11-19 11:59:03.394352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.084 [2024-11-19 11:59:03.394389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.344 11:59:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.344 11:59:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:00.344 11:59:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:00.344 11:59:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.344 11:59:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:00.914 malloc0 00:07:00.914 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.914 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:00.914 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.914 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.914 [2024-11-19 11:59:04.254571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:00.914 [2024-11-19 11:59:04.254663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.914 [2024-11-19 11:59:04.254690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:00.914 [2024-11-19 11:59:04.254702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.914 [2024-11-19 11:59:04.257120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.914 [2024-11-19 11:59:04.257164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:00.914 pt0 00:07:00.914 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.914 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:00.914 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.914 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.175 6bd7708a-05ff-4806-9a82-e1772aafa36c 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.175 9c8f0c00-1b3a-429e-99ca-a6c04928d081 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.175 b6d541f4-251a-4757-8581-8f33dbf43c78 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.175 [2024-11-19 11:59:04.387603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9c8f0c00-1b3a-429e-99ca-a6c04928d081 is claimed 00:07:01.175 [2024-11-19 11:59:04.387766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b6d541f4-251a-4757-8581-8f33dbf43c78 is claimed 00:07:01.175 [2024-11-19 11:59:04.387914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:01.175 [2024-11-19 11:59:04.387945] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:01.175 [2024-11-19 11:59:04.388258] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:01.175 [2024-11-19 11:59:04.388476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:01.175 [2024-11-19 11:59:04.388495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:01.175 [2024-11-19 11:59:04.388686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:01.175 11:59:04 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.175 [2024-11-19 11:59:04.499653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.175 [2024-11-19 11:59:04.527583] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:01.175 [2024-11-19 11:59:04.527623] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9c8f0c00-1b3a-429e-99ca-a6c04928d081' was resized: old size 131072, new size 204800 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.175 [2024-11-19 11:59:04.539532] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:01.175 [2024-11-19 11:59:04.539571] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b6d541f4-251a-4757-8581-8f33dbf43c78' was resized: old size 131072, new size 204800 00:07:01.175 [2024-11-19 11:59:04.539602] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.175 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.435 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.436 11:59:04 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:01.436 [2024-11-19 11:59:04.651397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.436 [2024-11-19 11:59:04.699128] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:07:01.436 [2024-11-19 11:59:04.699209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:01.436 [2024-11-19 11:59:04.699222] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:01.436 [2024-11-19 11:59:04.699239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:01.436 [2024-11-19 11:59:04.699351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.436 [2024-11-19 11:59:04.699394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.436 [2024-11-19 11:59:04.699408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.436 [2024-11-19 11:59:04.710944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:01.436 [2024-11-19 11:59:04.711008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:01.436 [2024-11-19 11:59:04.711030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:01.436 [2024-11-19 11:59:04.711040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:01.436 [2024-11-19 11:59:04.713192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:01.436 [2024-11-19 11:59:04.713227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:01.436 [2024-11-19 11:59:04.714799] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9c8f0c00-1b3a-429e-99ca-a6c04928d081 00:07:01.436 [2024-11-19 11:59:04.714861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9c8f0c00-1b3a-429e-99ca-a6c04928d081 is claimed 00:07:01.436 [2024-11-19 11:59:04.714979] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b6d541f4-251a-4757-8581-8f33dbf43c78 00:07:01.436 [2024-11-19 11:59:04.715011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b6d541f4-251a-4757-8581-8f33dbf43c78 is claimed 00:07:01.436 [2024-11-19 11:59:04.715143] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b6d541f4-251a-4757-8581-8f33dbf43c78 (2) smaller than existing raid bdev Raid (3) 00:07:01.436 [2024-11-19 11:59:04.715176] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 9c8f0c00-1b3a-429e-99ca-a6c04928d081: File exists 00:07:01.436 [2024-11-19 11:59:04.715228] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:01.436 [2024-11-19 11:59:04.715239] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:01.436 [2024-11-19 11:59:04.715476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:01.436 [2024-11-19 11:59:04.715638] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:01.436 [2024-11-19 11:59:04.715653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:01.436 [2024-11-19 11:59:04.715845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.436 pt0 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.436 [2024-11-19 11:59:04.739392] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60186 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60186 ']' 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60186 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.436 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60186 00:07:01.696 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.696 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.696 killing process with pid 60186 00:07:01.696 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60186' 00:07:01.696 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60186 00:07:01.696 [2024-11-19 11:59:04.816877] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.696 [2024-11-19 11:59:04.816940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.696 [2024-11-19 11:59:04.816981] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.696 [2024-11-19 11:59:04.816989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:01.696 11:59:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60186 00:07:03.077 [2024-11-19 11:59:06.186126] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.014 11:59:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:04.014 00:07:04.014 real 0m4.459s 00:07:04.014 user 0m4.632s 00:07:04.014 sys 0m0.580s 00:07:04.014 11:59:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.014 11:59:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.014 
************************************ 00:07:04.014 END TEST raid0_resize_superblock_test 00:07:04.014 ************************************ 00:07:04.014 11:59:07 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:04.014 11:59:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.014 11:59:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.014 11:59:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.014 ************************************ 00:07:04.014 START TEST raid1_resize_superblock_test 00:07:04.014 ************************************ 00:07:04.014 11:59:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:04.014 11:59:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:04.014 11:59:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60284 00:07:04.014 11:59:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:04.014 Process raid pid: 60284 00:07:04.014 11:59:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60284' 00:07:04.014 11:59:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60284 00:07:04.014 11:59:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60284 ']' 00:07:04.014 11:59:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.014 11:59:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:04.014 11:59:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:04.014 11:59:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:04.014 11:59:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.273 [2024-11-19 11:59:07.409649] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:07:04.273 [2024-11-19 11:59:07.409782] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:04.273 [2024-11-19 11:59:07.585946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:04.533 [2024-11-19 11:59:07.697457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:04.533 [2024-11-19 11:59:07.895620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:04.533 [2024-11-19 11:59:07.895670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:05.101 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:05.101 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:07:05.101 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:07:05.101 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.101 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.360 malloc0
00:07:05.360 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.622 [2024-11-19 11:59:08.741919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:05.622 [2024-11-19 11:59:08.741999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:05.622 [2024-11-19 11:59:08.742032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:05.622 [2024-11-19 11:59:08.742047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:05.622 [2024-11-19 11:59:08.744134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:05.622 [2024-11-19 11:59:08.744170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:07:05.622 pt0
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.622 02398bbe-c1ba-41a4-bb74-6a83d0018cb1
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.622 f8dabf26-e78c-4c9a-bcce-d7c4c0245d4b
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.622 b34ef627-d705-4cc6-a8b5-46d07b264c4f
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.622 [2024-11-19 11:59:08.874663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f8dabf26-e78c-4c9a-bcce-d7c4c0245d4b is claimed
00:07:05.622 [2024-11-19 11:59:08.874752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b34ef627-d705-4cc6-a8b5-46d07b264c4f is claimed
00:07:05.622 [2024-11-19 11:59:08.874884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:05.622 [2024-11-19 11:59:08.874904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:07:05.622 [2024-11-19 11:59:08.875182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:05.622 [2024-11-19 11:59:08.875393] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:05.622 [2024-11-19 11:59:08.875411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:07:05.622 [2024-11-19 11:59:08.875584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks'
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.622 [2024-11-19 11:59:08.974810] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:05.622 11:59:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.886 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:05.886 11:59:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 ))
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.886 [2024-11-19 11:59:09.018701] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:05.886 [2024-11-19 11:59:09.018739] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f8dabf26-e78c-4c9a-bcce-d7c4c0245d4b' was resized: old size 131072, new size 204800
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.886 [2024-11-19 11:59:09.030653] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:05.886 [2024-11-19 11:59:09.030694] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b34ef627-d705-4cc6-a8b5-46d07b264c4f' was resized: old size 131072, new size 204800
00:07:05.886 [2024-11-19 11:59:09.030751] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks'
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.886 [2024-11-19 11:59:09.138393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 ))
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.886 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.886 [2024-11-19 11:59:09.186135] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:07:05.886 [2024-11-19 11:59:09.186206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
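As a side note on the numbers in the resize trace above (this check is not part of the captured output): assuming 512-byte logical blocks and lvol sizes given in MiB, the reported block counts are internally consistent, and the raid1 bdev exposes a constant 8192 blocks (4 MiB) less than each base bdev across the resize.

```python
# Recompute the block counts reported by bdev_lvol_resize in the log above.
# Assumption (not stated in the log itself): 512-byte logical blocks,
# sizes passed to bdev_lvol_create/bdev_lvol_resize in MiB.

BLOCK_SIZE = 512
MIB = 1024 * 1024

def mib_to_blocks(size_mib: int) -> int:
    """Convert a size in MiB to a count of 512-byte blocks."""
    return size_mib * MIB // BLOCK_SIZE

old_blocks = mib_to_blocks(64)    # lvol created with size 64
new_blocks = mib_to_blocks(100)   # lvol resized to 100
assert old_blocks == 131072       # "old size 131072" in the log
assert new_blocks == 204800       # "new size 204800" in the log

# The raid1 bdev reports 122880 blocks before and 196608 after the resize;
# the gap to the base bdev size (8192 blocks = 4 MiB) is constant in this run.
assert old_blocks - 122880 == new_blocks - 196608 == 8192
```

This is why the `(( 122880 == 122880 ))` and `(( 196608 == 196608 ))` checks in the trace pass: the raid bdev's block count tracks the resized base bdevs minus a fixed reserved region.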
00:07:05.887 [2024-11-19 11:59:09.186231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:07:05.887 [2024-11-19 11:59:09.186384] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:05.887 [2024-11-19 11:59:09.186597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:05.887 [2024-11-19 11:59:09.186668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:05.887 [2024-11-19 11:59:09.186683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.887 [2024-11-19 11:59:09.198071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:05.887 [2024-11-19 11:59:09.198146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:05.887 [2024-11-19 11:59:09.198168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:07:05.887 [2024-11-19 11:59:09.198181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:05.887 [2024-11-19 11:59:09.200303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:05.887 [2024-11-19 11:59:09.200340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:07:05.887 [2024-11-19 11:59:09.202012] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f8dabf26-e78c-4c9a-bcce-d7c4c0245d4b
00:07:05.887 [2024-11-19 11:59:09.202100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f8dabf26-e78c-4c9a-bcce-d7c4c0245d4b is claimed
00:07:05.887 [2024-11-19 11:59:09.202222] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b34ef627-d705-4cc6-a8b5-46d07b264c4f
00:07:05.887 [2024-11-19 11:59:09.202242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b34ef627-d705-4cc6-a8b5-46d07b264c4f is claimed
00:07:05.887 [2024-11-19 11:59:09.202364] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b34ef627-d705-4cc6-a8b5-46d07b264c4f (2) smaller than existing raid bdev Raid (3)
00:07:05.887 [2024-11-19 11:59:09.202397] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev f8dabf26-e78c-4c9a-bcce-d7c4c0245d4b: File exists
00:07:05.887 [2024-11-19 11:59:09.202445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:07:05.887 [2024-11-19 11:59:09.202456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:07:05.887 [2024-11-19 11:59:09.202683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:07:05.887 [2024-11-19 11:59:09.202856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:07:05.887 [2024-11-19 11:59:09.202872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:07:05.887 [2024-11-19 11:59:09.203073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:05.887 pt0
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks'
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.887 [2024-11-19 11:59:09.226610] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 ))
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60284
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60284 ']'
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60284
00:07:05.887 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:07:06.154 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:06.154 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60284
00:07:06.154 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:06.154 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:06.154 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60284'
killing process with pid 60284
00:07:06.154 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60284
00:07:06.154 [2024-11-19 11:59:09.284446] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:06.154 [2024-11-19 11:59:09.284535] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:06.154 [2024-11-19 11:59:09.284602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:06.154 [2024-11-19 11:59:09.284616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:07:06.154 11:59:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60284
00:07:07.536 [2024-11-19 11:59:10.676511] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:08.477 11:59:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:07:08.477
00:07:08.477 real 0m4.421s
00:07:08.477 user 0m4.635s
00:07:08.477 sys 0m0.525s
00:07:08.477 11:59:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:08.477 11:59:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.478 ************************************
00:07:08.478 END TEST raid1_resize_superblock_test
************************************
00:07:08.478 11:59:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s
00:07:08.478 11:59:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']'
00:07:08.478 11:59:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd
00:07:08.478 11:59:11 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true
00:07:08.478 11:59:11 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd
00:07:08.478 11:59:11 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0
00:07:08.478 11:59:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:08.478 11:59:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:08.478 11:59:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:08.478 ************************************
00:07:08.478 START TEST raid_function_test_raid0
00:07:08.478 ************************************
00:07:08.478 11:59:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0
00:07:08.478 11:59:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0
00:07:08.478 11:59:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:07:08.478 11:59:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:07:08.478 11:59:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60387
00:07:08.478 Process raid pid: 60387
00:07:08.478 11:59:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:08.478 11:59:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60387'
00:07:08.478 11:59:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60387
00:07:08.478 11:59:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60387 ']'
00:07:08.478 11:59:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:08.478 11:59:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:08.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:08.478 11:59:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:08.478 11:59:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:08.478 11:59:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:08.738 [2024-11-19 11:59:11.918711] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:07:08.738 [2024-11-19 11:59:11.918819] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:08.738 [2024-11-19 11:59:12.091951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:08.998 [2024-11-19 11:59:12.208563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:09.258 [2024-11-19 11:59:12.409494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:09.258 [2024-11-19 11:59:12.409532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:09.518 11:59:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:09.518 11:59:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0
00:07:09.518 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:07:09.518 11:59:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:09.518 11:59:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:09.518 Base_1
00:07:09.518 11:59:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:09.518 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:07:09.518 11:59:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:09.518 11:59:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:09.518 Base_2
00:07:09.518 11:59:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:09.518 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid
00:07:09.518 11:59:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:09.518 11:59:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:09.519 [2024-11-19 11:59:12.839861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:09.519 [2024-11-19 11:59:12.841638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:09.519 [2024-11-19 11:59:12.841724] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:09.519 [2024-11-19 11:59:12.841736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:09.519 [2024-11-19 11:59:12.841980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:09.519 [2024-11-19 11:59:12.842134] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:09.519 [2024-11-19 11:59:12.842147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:07:09.519 [2024-11-19 11:59:12.842290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:09.519 11:59:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:09.519 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:07:09.519 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:07:09.519 11:59:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:09.519 11:59:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:09.519 11:59:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:09.779 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:07:09.780 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:07:09.780 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:07:09.780 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:07:09.780 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:07:09.780 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:09.780 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:07:09.780 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:09.780 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i
00:07:09.780 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:09.780 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:09.780 11:59:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:07:09.780 [2024-11-19 11:59:13.111493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:07:09.780 /dev/nbd0
00:07:09.780 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:09.780 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:09.780 11:59:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:09.780 11:59:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:10.041 1+0 records in
00:07:10.041 1+0 records out
00:07:10.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472875 s, 8.7 MB/s
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:10.041 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:10.041 {
00:07:10.041 "nbd_device": "/dev/nbd0",
00:07:10.041 "bdev_name": "raid"
00:07:10.041 }
00:07:10.041 ]'
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[
00:07:10.301 {
00:07:10.301 "nbd_device": "/dev/nbd0",
00:07:10.301 "bdev_name": "raid"
00:07:10.301 }
00:07:10.301 ]'
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:07:10.301 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:07:10.302 4096+0 records in
00:07:10.302 4096+0 records out
00:07:10.302 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.034434 s, 60.9 MB/s
00:07:10.302 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:07:10.562 4096+0 records in
00:07:10.562 4096+0 records out
00:07:10.562 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.211542 s, 9.9 MB/s
00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:07:10.562 128+0 records in
00:07:10.562 128+0 records out
00:07:10.562 65536 bytes (66 kB, 64 KiB) copied, 0.00124274 s, 52.7 MB/s
00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:10.562 2035+0 records in 00:07:10.562 2035+0 records out 00:07:10.562 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0156966 s, 66.4 MB/s 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:10.562 456+0 records in 00:07:10.562 456+0 records out 00:07:10.562 233472 bytes (233 kB, 228 KiB) copied, 0.00398555 s, 58.6 MB/s 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.562 11:59:13 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.562 11:59:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:10.823 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.823 [2024-11-19 11:59:14.079519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.823 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.823 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.823 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.823 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.823 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:07:10.823 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:10.823 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.823 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:10.823 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:10.823 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60387 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60387 ']' 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@958 -- # kill -0 60387 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60387 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.084 killing process with pid 60387 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60387' 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60387 00:07:11.084 [2024-11-19 11:59:14.393877] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.084 [2024-11-19 11:59:14.393989] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.084 [2024-11-19 11:59:14.394053] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.084 [2024-11-19 11:59:14.394069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:11.084 11:59:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60387 00:07:11.344 [2024-11-19 11:59:14.600854] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.725 11:59:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:12.725 00:07:12.725 real 0m3.851s 00:07:12.725 user 0m4.491s 00:07:12.725 sys 0m0.995s 00:07:12.725 11:59:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.725 11:59:15 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:07:12.725 ************************************ 00:07:12.725 END TEST raid_function_test_raid0 00:07:12.725 ************************************ 00:07:12.725 11:59:15 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:12.725 11:59:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:12.725 11:59:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.725 11:59:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.725 ************************************ 00:07:12.725 START TEST raid_function_test_concat 00:07:12.725 ************************************ 00:07:12.725 11:59:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:12.725 11:59:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:12.725 11:59:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:12.725 11:59:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:12.725 11:59:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60511 00:07:12.725 11:59:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:12.725 Process raid pid: 60511 00:07:12.725 11:59:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60511' 00:07:12.725 11:59:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60511 00:07:12.725 11:59:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60511 ']' 00:07:12.725 11:59:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.725 11:59:15 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.725 11:59:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.725 11:59:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.725 11:59:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:12.725 [2024-11-19 11:59:15.838535] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:07:12.725 [2024-11-19 11:59:15.838644] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.725 [2024-11-19 11:59:16.014938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.986 [2024-11-19 11:59:16.144797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.986 [2024-11-19 11:59:16.346196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.986 [2024-11-19 11:59:16.346243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:13.556 Base_1 
00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:13.556 Base_2 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:13.556 [2024-11-19 11:59:16.825131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:13.556 [2024-11-19 11:59:16.826889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:13.556 [2024-11-19 11:59:16.826955] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:13.556 [2024-11-19 11:59:16.826966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:13.556 [2024-11-19 11:59:16.827260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:13.556 [2024-11-19 11:59:16.827406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:13.556 [2024-11-19 11:59:16.827415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:13.556 [2024-11-19 11:59:16.827572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.556 11:59:16 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:13.556 11:59:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:07:13.817 [2024-11-19 11:59:17.052810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:13.817 /dev/nbd0 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.817 1+0 records in 00:07:13.817 1+0 records out 00:07:13.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404053 s, 10.1 MB/s 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:13.817 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:14.077 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:14.077 { 00:07:14.077 "nbd_device": "/dev/nbd0", 00:07:14.077 "bdev_name": "raid" 00:07:14.077 } 00:07:14.077 ]' 00:07:14.077 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:14.077 { 00:07:14.077 "nbd_device": "/dev/nbd0", 00:07:14.077 "bdev_name": "raid" 00:07:14.077 } 00:07:14.078 ]' 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:14.078 11:59:17 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:14.078 4096+0 records in 00:07:14.078 4096+0 records out 00:07:14.078 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0357635 s, 58.6 MB/s 00:07:14.078 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:14.338 4096+0 records in 00:07:14.338 4096+0 records out 00:07:14.338 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.195332 s, 10.7 MB/s 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:14.338 128+0 records in 00:07:14.338 128+0 records out 00:07:14.338 65536 bytes (66 kB, 64 KiB) copied, 0.00112535 s, 58.2 MB/s 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:14.338 2035+0 records in 00:07:14.338 2035+0 records out 00:07:14.338 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0138544 s, 75.2 MB/s 00:07:14.338 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:14.598 456+0 records in 00:07:14.598 456+0 records out 00:07:14.598 233472 bytes (233 kB, 228 KiB) copied, 0.00352335 s, 66.3 MB/s 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.598 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:14.858 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.858 [2024-11-19 11:59:17.978932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.858 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.858 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.858 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.858 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.858 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.858 11:59:17 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:14.858 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.858 11:59:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:14.858 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:14.858 11:59:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:14.858 11:59:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:14.858 11:59:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:14.858 11:59:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60511 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60511 ']' 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- 
# kill -0 60511 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60511 00:07:15.118 killing process with pid 60511 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60511' 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60511 00:07:15.118 [2024-11-19 11:59:18.284717] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:15.118 [2024-11-19 11:59:18.284819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.118 [2024-11-19 11:59:18.284877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.118 11:59:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60511 00:07:15.118 [2024-11-19 11:59:18.284889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:15.118 [2024-11-19 11:59:18.490306] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.505 11:59:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:16.505 00:07:16.505 real 0m3.849s 00:07:16.505 user 0m4.430s 00:07:16.505 sys 0m1.001s 00:07:16.505 11:59:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.505 11:59:19 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:07:16.505 ************************************ 00:07:16.505 END TEST raid_function_test_concat 00:07:16.505 ************************************ 00:07:16.505 11:59:19 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:16.505 11:59:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:16.505 11:59:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.505 11:59:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.505 ************************************ 00:07:16.505 START TEST raid0_resize_test 00:07:16.505 ************************************ 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60633 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60633' 00:07:16.505 Process raid pid: 60633 
00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60633 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60633 ']' 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.505 11:59:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.505 [2024-11-19 11:59:19.749811] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:16.505 [2024-11-19 11:59:19.750023] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.764 [2024-11-19 11:59:19.902097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.764 [2024-11-19 11:59:20.021382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.024 [2024-11-19 11:59:20.229765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.024 [2024-11-19 11:59:20.229886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.284 Base_1 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.284 Base_2 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.284 [2024-11-19 11:59:20.629271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:17.284 [2024-11-19 11:59:20.631051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:17.284 [2024-11-19 11:59:20.631144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:17.284 [2024-11-19 11:59:20.631156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:17.284 [2024-11-19 11:59:20.631448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:17.284 [2024-11-19 11:59:20.631582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:17.284 [2024-11-19 11:59:20.631591] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:17.284 [2024-11-19 11:59:20.631755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.284 [2024-11-19 11:59:20.641230] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:17.284 [2024-11-19 11:59:20.641335] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:17.284 true 
00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.284 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:17.284 [2024-11-19 11:59:20.653371] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.544 [2024-11-19 11:59:20.701163] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:17.544 [2024-11-19 11:59:20.701254] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:17.544 [2024-11-19 11:59:20.701317] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:17.544 true 
00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:17.544 [2024-11-19 11:59:20.717328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60633 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60633 ']' 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60633 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60633 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.544 11:59:20 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60633' 00:07:17.544 killing process with pid 60633 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60633 00:07:17.544 [2024-11-19 11:59:20.799058] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.544 [2024-11-19 11:59:20.799268] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.544 11:59:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60633 00:07:17.544 [2024-11-19 11:59:20.799354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.544 [2024-11-19 11:59:20.799364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:17.544 [2024-11-19 11:59:20.816253] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.921 11:59:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:18.921 00:07:18.921 real 0m2.411s 00:07:18.921 user 0m2.564s 00:07:18.921 sys 0m0.325s 00:07:18.921 11:59:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.921 11:59:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.921 ************************************ 00:07:18.921 END TEST raid0_resize_test 00:07:18.921 ************************************ 00:07:18.921 11:59:22 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:18.921 11:59:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.921 11:59:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.921 11:59:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.921 
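The raid0 size checks in the test above reduce to simple arithmetic: a raid0 bdev's usable block count is the sum of its (equal-sized) base bdevs, so two 32 MiB null bdevs with 512-byte blocks give 131072 blocks (64 MiB), and resizing both bases to 64 MiB doubles that to 262144 blocks (128 MiB), matching the `blkcnt` values logged above. A minimal sketch of that bookkeeping (not SPDK code, just the arithmetic the test verifies; helper names are illustrative):

```python
BLKSIZE = 512  # bytes per block, matching the test's blksize=512

def mb_to_blocks(size_mb: int) -> int:
    """Convert a bdev size in MiB to a 512-byte block count."""
    return size_mb * 1024 * 1024 // BLKSIZE

def raid0_blocks(base_sizes_mb: list) -> int:
    """raid0 capacity is the sum of the base bdevs (equal-sized here)."""
    return sum(mb_to_blocks(s) for s in base_sizes_mb)

# Before resize: two 32 MiB bases -> 131072 blocks (64 MiB raid).
assert raid0_blocks([32, 32]) == 131072
# After resizing both bases to 64 MiB -> 262144 blocks (128 MiB raid).
assert raid0_blocks([64, 64]) == 262144
```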
************************************ 00:07:18.921 START TEST raid1_resize_test 00:07:18.921 ************************************ 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60695 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60695' 00:07:18.921 Process raid pid: 60695 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60695 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60695 ']' 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.921 11:59:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.922 11:59:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.922 11:59:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.922 [2024-11-19 11:59:22.234521] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:07:18.922 [2024-11-19 11:59:22.234701] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.181 [2024-11-19 11:59:22.416242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.441 [2024-11-19 11:59:22.557327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.441 [2024-11-19 11:59:22.799945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.441 [2024-11-19 11:59:22.800120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.010 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.010 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:20.010 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:20.010 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.010 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.010 Base_1 00:07:20.010 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.010 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:20.010 11:59:23 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.010 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.010 Base_2 00:07:20.010 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.010 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:20.010 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:20.010 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.010 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.010 [2024-11-19 11:59:23.164279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:20.010 [2024-11-19 11:59:23.166382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:20.010 [2024-11-19 11:59:23.166447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:20.010 [2024-11-19 11:59:23.166461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:20.010 [2024-11-19 11:59:23.166752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:20.010 [2024-11-19 11:59:23.166902] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:20.011 [2024-11-19 11:59:23.166917] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:20.011 [2024-11-19 11:59:23.167127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:20.011 11:59:23 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.011 [2024-11-19 11:59:23.176242] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:20.011 [2024-11-19 11:59:23.176278] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:20.011 true 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.011 [2024-11-19 11:59:23.192397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:20.011 [2024-11-19 11:59:23.240126] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:20.011 [2024-11-19 11:59:23.240201] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:20.011 [2024-11-19 11:59:23.240275] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:20.011 true 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.011 [2024-11-19 11:59:23.256345] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60695 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60695 ']' 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60695 00:07:20.011 
11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60695 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60695' 00:07:20.011 killing process with pid 60695 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60695 00:07:20.011 11:59:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60695 00:07:20.011 [2024-11-19 11:59:23.347440] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.011 [2024-11-19 11:59:23.347601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.011 [2024-11-19 11:59:23.348231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.011 [2024-11-19 11:59:23.348313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:20.011 [2024-11-19 11:59:23.368612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.401 11:59:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:21.401 00:07:21.401 real 0m2.532s 00:07:21.401 user 0m2.710s 00:07:21.401 sys 0m0.369s 00:07:21.401 11:59:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.401 11:59:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.401 ************************************ 00:07:21.401 END TEST raid1_resize_test 
00:07:21.401 ************************************ 00:07:21.401 11:59:24 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:21.401 11:59:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:21.401 11:59:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:21.401 11:59:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:21.401 11:59:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.401 11:59:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.401 ************************************ 00:07:21.401 START TEST raid_state_function_test 00:07:21.401 ************************************ 00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:21.401 11:59:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60757
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60757'
00:07:21.401 Process raid pid: 60757
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60757
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60757 ']'
00:07:21.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:21.401 11:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:21.661 [2024-11-19 11:59:24.848394] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:07:21.661 [2024-11-19 11:59:24.848534] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:21.661 [2024-11-19 11:59:25.027374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.921 [2024-11-19 11:59:25.169587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.180 [2024-11-19 11:59:25.415976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:22.180 [2024-11-19 11:59:25.416126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.440 [2024-11-19 11:59:25.767035] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:22.440 [2024-11-19 11:59:25.767148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:22.440 [2024-11-19 11:59:25.767196] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:22.440 [2024-11-19 11:59:25.767227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:22.440
11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.440 11:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.700 11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:22.700 "name": "Existed_Raid",
00:07:22.700 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:22.700 "strip_size_kb": 64,
00:07:22.700 "state": "configuring",
00:07:22.700 "raid_level": "raid0",
00:07:22.700 "superblock": false,
00:07:22.700 "num_base_bdevs": 2,
00:07:22.700 "num_base_bdevs_discovered": 0,
00:07:22.700 "num_base_bdevs_operational": 2,
00:07:22.700 "base_bdevs_list": [
00:07:22.700 {
00:07:22.700 "name": "BaseBdev1",
00:07:22.700 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:22.700 "is_configured": false,
00:07:22.700 "data_offset": 0,
00:07:22.700 "data_size": 0
00:07:22.700 },
00:07:22.700 {
00:07:22.700 "name": "BaseBdev2",
00:07:22.700 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:22.700 "is_configured": false,
00:07:22.700 "data_offset": 0,
00:07:22.700 "data_size": 0
00:07:22.700 }
00:07:22.700 ]
00:07:22.700 }'
00:07:22.700 11:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:22.700 11:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.958 [2024-11-19 11:59:26.230179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:22.958 [2024-11-19 11:59:26.230258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.958 [2024-11-19 11:59:26.242145] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:22.958 [2024-11-19 11:59:26.242220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:22.958 [2024-11-19 11:59:26.242250] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:22.958 [2024-11-19 11:59:26.242274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.958 [2024-11-19 11:59:26.290258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:22.958 BaseBdev1
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.958 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.958 [
00:07:22.958 {
00:07:22.958 "name": "BaseBdev1",
00:07:22.958 "aliases": [
00:07:22.959 "735c0c56-6f04-4804-a607-400fd1eaac1a"
00:07:22.959 ],
00:07:22.959 "product_name": "Malloc disk",
00:07:22.959 "block_size": 512,
00:07:22.959 "num_blocks": 65536,
00:07:22.959 "uuid": "735c0c56-6f04-4804-a607-400fd1eaac1a",
00:07:22.959 "assigned_rate_limits": {
00:07:22.959 "rw_ios_per_sec": 0,
00:07:22.959 "rw_mbytes_per_sec": 0,
00:07:22.959 "r_mbytes_per_sec": 0,
00:07:22.959 "w_mbytes_per_sec": 0
00:07:22.959 },
00:07:22.959 "claimed": true,
00:07:22.959 "claim_type": "exclusive_write",
00:07:22.959 "zoned": false,
00:07:22.959 "supported_io_types": {
00:07:22.959 "read": true,
00:07:22.959 "write": true,
00:07:22.959 "unmap": true,
00:07:22.959 "flush": true,
00:07:22.959 "reset": true,
00:07:22.959 "nvme_admin": false,
00:07:22.959 "nvme_io": false,
00:07:22.959 "nvme_io_md": false,
00:07:22.959 "write_zeroes": true,
00:07:22.959 "zcopy": true,
00:07:22.959 "get_zone_info": false,
00:07:22.959 "zone_management": false,
00:07:22.959 "zone_append": false,
00:07:22.959 "compare": false,
00:07:22.959 "compare_and_write": false,
00:07:22.959 "abort": true,
00:07:22.959 "seek_hole": false,
00:07:22.959 "seek_data": false,
00:07:22.959 "copy": true,
00:07:22.959 "nvme_iov_md": false
00:07:22.959 },
00:07:22.959 "memory_domains": [
00:07:22.959 {
00:07:22.959 "dma_device_id": "system",
00:07:22.959 "dma_device_type": 1
00:07:22.959 },
00:07:22.959 {
00:07:22.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:22.959 "dma_device_type": 2
00:07:22.959 }
00:07:22.959 ],
00:07:22.959 "driver_specific": {}
00:07:22.959 }
00:07:22.959 ]
00:07:22.959 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.959 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:22.959 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:22.959 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:22.959 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:22.959 11:59:26
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:22.959 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:22.959 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:22.959 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:22.959 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:22.959 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:22.959 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:22.959 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:23.217 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.217 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:23.217 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.217 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.217 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:23.217 "name": "Existed_Raid",
00:07:23.217 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:23.217 "strip_size_kb": 64,
00:07:23.217 "state": "configuring",
00:07:23.217 "raid_level": "raid0",
00:07:23.217 "superblock": false,
00:07:23.217 "num_base_bdevs": 2,
00:07:23.217 "num_base_bdevs_discovered": 1,
00:07:23.217 "num_base_bdevs_operational": 2,
00:07:23.217 "base_bdevs_list": [
00:07:23.217 {
00:07:23.217 "name": "BaseBdev1",
00:07:23.217 "uuid": "735c0c56-6f04-4804-a607-400fd1eaac1a",
00:07:23.217 "is_configured": true,
00:07:23.217 "data_offset": 0,
00:07:23.217 "data_size": 65536
00:07:23.217 },
00:07:23.217 {
00:07:23.217 "name": "BaseBdev2",
00:07:23.217 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:23.217 "is_configured": false,
00:07:23.217 "data_offset": 0,
00:07:23.217 "data_size": 0
00:07:23.217 }
00:07:23.218 ]
00:07:23.218 }'
00:07:23.218 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:23.218 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.478 [2024-11-19 11:59:26.757526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:23.478 [2024-11-19 11:59:26.757624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.478 [2024-11-19 11:59:26.769524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:23.478 [2024-11-19 11:59:26.771475] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:23.478 [2024-11-19 11:59:26.771555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
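Throughout this run the script's verify_raid_bdev_state helper pulls the Existed_Raid entry out of `rpc_cmd bdev_raid_get_bdevs all` with jq and compares its fields against the expected state. As a rough illustration only (this is not the test's actual shell code; the helper below is hypothetical, but the field names are taken verbatim from the JSON dumps in this log), the same check can be sketched in Python against a trimmed copy of the output above:

```python
import json

# Trimmed copy of one bdev_raid_get_bdevs entry as printed in the log above.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 2
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    """Illustrative stand-in for the bash helper: assert the fields match."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return True

# Mirrors "verify_raid_bdev_state Existed_Raid configuring raid0 64 2" from the trace.
print(verify_raid_bdev_state(raid_bdev_info, "configuring", "raid0", 64, 2))
```

In the log itself the equivalent selection is done with `jq -r '.[] | select(.name == "Existed_Raid")'` and plain bash comparisons.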
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:23.478 "name": "Existed_Raid",
00:07:23.478 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:23.478 "strip_size_kb": 64,
00:07:23.478 "state": "configuring",
00:07:23.478 "raid_level": "raid0",
00:07:23.478 "superblock": false,
00:07:23.478 "num_base_bdevs": 2,
00:07:23.478 "num_base_bdevs_discovered": 1,
00:07:23.478 "num_base_bdevs_operational": 2,
00:07:23.478 "base_bdevs_list": [
00:07:23.478 {
00:07:23.478 "name": "BaseBdev1",
00:07:23.478 "uuid": "735c0c56-6f04-4804-a607-400fd1eaac1a",
00:07:23.478 "is_configured": true,
00:07:23.478 "data_offset": 0,
00:07:23.478 "data_size": 65536
00:07:23.478 },
00:07:23.478 {
00:07:23.478 "name": "BaseBdev2",
00:07:23.478 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:23.478 "is_configured": false,
00:07:23.478 "data_offset": 0,
00:07:23.478 "data_size": 0
00:07:23.478 }
00:07:23.478 ]
00:07:23.478 }'
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:23.478 11:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.073 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:24.073 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.073 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.073 [2024-11-19 11:59:27.247221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:24.073 [2024-11-19 11:59:27.247345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:24.073 [2024-11-19 11:59:27.247371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:24.073 [2024-11-19 11:59:27.247680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:24.334 BaseBdev2 [
00:07:24.334 {
00:07:24.334 "name": "BaseBdev2",
00:07:24.334 "aliases": [
00:07:24.334 "eb54b51a-3460-4c07-b517-0b21e5b55edc"
00:07:24.334 ],
00:07:24.334 "product_name": "Malloc disk",
00:07:24.334 "block_size": 512,
00:07:24.334 "num_blocks": 65536,
00:07:24.334 "uuid": "eb54b51a-3460-4c07-b517-0b21e5b55edc",
00:07:24.334 "assigned_rate_limits": {
00:07:24.334 "rw_ios_per_sec": 0,
00:07:24.334 "rw_mbytes_per_sec": 0,
00:07:24.334 "r_mbytes_per_sec": 0,
00:07:24.334 "w_mbytes_per_sec": 0
00:07:24.334 },
00:07:24.334 "claimed": true,
00:07:24.334 "claim_type": "exclusive_write",
00:07:24.334 "zoned": false,
00:07:24.334 "supported_io_types": {
00:07:24.334 "read": true,
00:07:24.334 "write": true,
00:07:24.334 "unmap": true,
00:07:24.334 "flush": true,
00:07:24.334 "reset": true,
00:07:24.334 "nvme_admin": false,
00:07:24.334 "nvme_io": false,
00:07:24.334 "nvme_io_md": false,
00:07:24.334 "write_zeroes": true,
00:07:24.334 "zcopy": true,
00:07:24.334 "get_zone_info": false,
00:07:24.334 "zone_management": false,
00:07:24.334 "zone_append": false,
00:07:24.334 "compare": false,
00:07:24.334 "compare_and_write": false,
00:07:24.334 "abort": true,
00:07:24.334 "seek_hole": false,
00:07:24.334 "seek_data": false,
00:07:24.334 "copy": true,
00:07:24.334 "nvme_iov_md": false
00:07:24.334 },
00:07:24.334 "memory_domains": [
00:07:24.334 {
00:07:24.334 "dma_device_id": "system",
00:07:24.334 "dma_device_type": 1
00:07:24.334 },
00:07:24.334 {
00:07:24.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:24.334 "dma_device_type": 2
00:07:24.334 }
00:07:24.334 ],
00:07:24.334 "driver_specific": {}
00:07:24.334 }
00:07:24.334 ]
00:07:24.334 [2024-11-19 11:59:27.247896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:24.334 [2024-11-19 11:59:27.247944] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:07:24.334 [2024-11-19 11:59:27.248242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:24.334 11:59:27
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:24.334 "name": "Existed_Raid",
00:07:24.334 "uuid": "41eb4f43-be04-4125-85aa-0eb3bb692a2c",
00:07:24.334 "strip_size_kb": 64,
00:07:24.334 "state": "online",
00:07:24.334 "raid_level": "raid0",
00:07:24.334 "superblock": false,
00:07:24.334 "num_base_bdevs": 2,
00:07:24.334 "num_base_bdevs_discovered": 2,
00:07:24.334 "num_base_bdevs_operational": 2,
00:07:24.334 "base_bdevs_list": [
00:07:24.334 {
00:07:24.334 "name": "BaseBdev1",
00:07:24.334 "uuid": "735c0c56-6f04-4804-a607-400fd1eaac1a",
00:07:24.334 "is_configured": true,
00:07:24.334 "data_offset": 0,
00:07:24.334 "data_size": 65536
00:07:24.334 },
00:07:24.334 {
00:07:24.334 "name": "BaseBdev2",
00:07:24.334 "uuid": "eb54b51a-3460-4c07-b517-0b21e5b55edc",
00:07:24.334 "is_configured": true,
00:07:24.334 "data_offset": 0,
00:07:24.334 "data_size": 65536
00:07:24.334 }
00:07:24.334 ]
00:07:24.334 }'
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:24.334 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.595 [2024-11-19 11:59:27.734720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:24.595 "name": "Existed_Raid",
00:07:24.595 "aliases": [
00:07:24.595 "41eb4f43-be04-4125-85aa-0eb3bb692a2c"
00:07:24.595 ],
00:07:24.595 "product_name": "Raid Volume",
00:07:24.595 "block_size": 512,
00:07:24.595 "num_blocks": 131072,
00:07:24.595 "uuid": "41eb4f43-be04-4125-85aa-0eb3bb692a2c",
00:07:24.595 "assigned_rate_limits": {
00:07:24.595 "rw_ios_per_sec": 0,
00:07:24.595 "rw_mbytes_per_sec": 0,
00:07:24.595 "r_mbytes_per_sec": 0,
00:07:24.595 "w_mbytes_per_sec": 0
00:07:24.595 },
00:07:24.595 "claimed": false,
00:07:24.595 "zoned": false,
00:07:24.595 "supported_io_types": {
00:07:24.595 "read": true,
00:07:24.595 "write": true,
00:07:24.595 "unmap": true,
00:07:24.595 "flush": true,
00:07:24.595 "reset": true,
00:07:24.595 "nvme_admin": false,
00:07:24.595 "nvme_io": false,
00:07:24.595 "nvme_io_md": false,
00:07:24.595 "write_zeroes": true,
00:07:24.595 "zcopy": false,
00:07:24.595 "get_zone_info": false,
00:07:24.595 "zone_management": false,
00:07:24.595 "zone_append": false,
00:07:24.595 "compare": false,
00:07:24.595 "compare_and_write": false,
00:07:24.595 "abort": false,
00:07:24.595 "seek_hole": false,
00:07:24.595 "seek_data": false,
00:07:24.595 "copy": false,
00:07:24.595 "nvme_iov_md": false
00:07:24.595 },
00:07:24.595 "memory_domains": [
00:07:24.595 {
00:07:24.595 "dma_device_id": "system",
00:07:24.595 "dma_device_type": 1
00:07:24.595 },
00:07:24.595 {
00:07:24.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:24.595 "dma_device_type": 2
00:07:24.595 },
00:07:24.595 {
00:07:24.595 "dma_device_id": "system",
00:07:24.595 "dma_device_type": 1
00:07:24.595 },
00:07:24.595 {
00:07:24.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:24.595 "dma_device_type": 2
00:07:24.595 }
00:07:24.595 ],
00:07:24.595 "driver_specific": {
00:07:24.595 "raid": {
00:07:24.595 "uuid": "41eb4f43-be04-4125-85aa-0eb3bb692a2c",
00:07:24.595 "strip_size_kb": 64,
00:07:24.595 "state": "online",
00:07:24.595 "raid_level": "raid0",
00:07:24.595 "superblock": false,
00:07:24.595 "num_base_bdevs": 2,
00:07:24.595 "num_base_bdevs_discovered": 2,
00:07:24.595 "num_base_bdevs_operational": 2,
00:07:24.595 "base_bdevs_list": [
00:07:24.595 {
00:07:24.595 "name": "BaseBdev1",
00:07:24.595 "uuid": "735c0c56-6f04-4804-a607-400fd1eaac1a",
00:07:24.595 "is_configured": true,
00:07:24.595 "data_offset": 0,
00:07:24.595 "data_size": 65536
00:07:24.595 },
00:07:24.595 {
00:07:24.595 "name": "BaseBdev2",
00:07:24.595 "uuid": "eb54b51a-3460-4c07-b517-0b21e5b55edc",
00:07:24.595 "is_configured": true,
00:07:24.595 "data_offset": 0,
00:07:24.595 "data_size": 65536
00:07:24.595 }
00:07:24.595 ]
00:07:24.595 }
00:07:24.595 }
00:07:24.595 }'
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:24.595 BaseBdev2'
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test --
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.595 11:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.595 [2024-11-19 11:59:27.950124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:24.595 [2024-11-19 11:59:27.950198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:24.595 [2024-11-19 11:59:27.950276] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:24.856 "name": "Existed_Raid",
00:07:24.856 "uuid": "41eb4f43-be04-4125-85aa-0eb3bb692a2c",
00:07:24.856 "strip_size_kb": 64,
00:07:24.856 "state": "offline",
00:07:24.856 "raid_level": "raid0",
00:07:24.856 "superblock": false,
00:07:24.856 "num_base_bdevs": 2,
00:07:24.856 "num_base_bdevs_discovered": 1,
00:07:24.856 "num_base_bdevs_operational": 1,
00:07:24.856 "base_bdevs_list": [
00:07:24.856 {
00:07:24.856 "name": null,
00:07:24.856 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:24.856 "is_configured": false,
00:07:24.856 "data_offset": 0,
00:07:24.856 "data_size": 65536
00:07:24.856 },
00:07:24.856 {
00:07:24.856 "name": "BaseBdev2",
00:07:24.856 "uuid": "eb54b51a-3460-4c07-b517-0b21e5b55edc",
00:07:24.856 "is_configured": true,
00:07:24.856 "data_offset": 0,
00:07:24.856 "data_size": 65536
00:07:24.856 }
00:07:24.856 ]
00:07:24.856 }'
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:24.856 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.427 [2024-11-19 11:59:28.570045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:07:25.427 [2024-11-19 11:59:28.570152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:25.427 11:59:28
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60757 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60757 ']' 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60757 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60757 00:07:25.427 killing process with pid 60757 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60757' 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60757 00:07:25.427 [2024-11-19 11:59:28.772021] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:25.427 11:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60757 00:07:25.427 [2024-11-19 11:59:28.790544] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.842 11:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:26.842 00:07:26.842 real 0m5.363s 00:07:26.842 user 0m7.676s 00:07:26.842 sys 0m0.828s 00:07:26.842 ************************************ 00:07:26.842 END TEST raid_state_function_test 00:07:26.842 ************************************ 00:07:26.842 11:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.842 11:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.842 11:59:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:26.842 11:59:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:26.842 11:59:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.842 11:59:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.842 ************************************ 00:07:26.842 START TEST raid_state_function_test_sb 00:07:26.842 ************************************ 00:07:26.842 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:26.842 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:26.842 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:26.842 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:26.842 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61016 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:26.843 Process raid pid: 61016 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61016' 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61016 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61016 ']' 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.843 11:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.102 [2024-11-19 11:59:30.307434] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:27.102 [2024-11-19 11:59:30.307787] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.361 [2024-11-19 11:59:30.498109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.361 [2024-11-19 11:59:30.660914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.620 [2024-11-19 11:59:30.936083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.620 [2024-11-19 11:59:30.936155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.879 [2024-11-19 11:59:31.234755] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.879 [2024-11-19 11:59:31.234935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.879 [2024-11-19 11:59:31.234971] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.879 [2024-11-19 11:59:31.235009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.879 
11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.879 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.137 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.137 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.137 "name": "Existed_Raid", 00:07:28.137 "uuid": "b1a91c22-216c-4cef-92c7-4b28b381ace6", 00:07:28.137 "strip_size_kb": 
64, 00:07:28.137 "state": "configuring", 00:07:28.137 "raid_level": "raid0", 00:07:28.137 "superblock": true, 00:07:28.137 "num_base_bdevs": 2, 00:07:28.137 "num_base_bdevs_discovered": 0, 00:07:28.137 "num_base_bdevs_operational": 2, 00:07:28.137 "base_bdevs_list": [ 00:07:28.137 { 00:07:28.137 "name": "BaseBdev1", 00:07:28.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.137 "is_configured": false, 00:07:28.137 "data_offset": 0, 00:07:28.137 "data_size": 0 00:07:28.137 }, 00:07:28.137 { 00:07:28.137 "name": "BaseBdev2", 00:07:28.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.137 "is_configured": false, 00:07:28.137 "data_offset": 0, 00:07:28.137 "data_size": 0 00:07:28.137 } 00:07:28.137 ] 00:07:28.137 }' 00:07:28.137 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.137 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.395 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.395 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.396 [2024-11-19 11:59:31.649941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.396 [2024-11-19 11:59:31.650085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.396 11:59:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.396 [2024-11-19 11:59:31.657929] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:28.396 [2024-11-19 11:59:31.657981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.396 [2024-11-19 11:59:31.658002] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.396 [2024-11-19 11:59:31.658018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.396 [2024-11-19 11:59:31.706680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.396 BaseBdev1 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.396 [ 00:07:28.396 { 00:07:28.396 "name": "BaseBdev1", 00:07:28.396 "aliases": [ 00:07:28.396 "cac964ef-a7bc-4f09-bfd0-8030516c3dc7" 00:07:28.396 ], 00:07:28.396 "product_name": "Malloc disk", 00:07:28.396 "block_size": 512, 00:07:28.396 "num_blocks": 65536, 00:07:28.396 "uuid": "cac964ef-a7bc-4f09-bfd0-8030516c3dc7", 00:07:28.396 "assigned_rate_limits": { 00:07:28.396 "rw_ios_per_sec": 0, 00:07:28.396 "rw_mbytes_per_sec": 0, 00:07:28.396 "r_mbytes_per_sec": 0, 00:07:28.396 "w_mbytes_per_sec": 0 00:07:28.396 }, 00:07:28.396 "claimed": true, 00:07:28.396 "claim_type": "exclusive_write", 00:07:28.396 "zoned": false, 00:07:28.396 "supported_io_types": { 00:07:28.396 "read": true, 00:07:28.396 "write": true, 00:07:28.396 "unmap": true, 00:07:28.396 "flush": true, 00:07:28.396 "reset": true, 00:07:28.396 "nvme_admin": false, 00:07:28.396 "nvme_io": false, 00:07:28.396 "nvme_io_md": false, 00:07:28.396 "write_zeroes": true, 00:07:28.396 "zcopy": true, 00:07:28.396 "get_zone_info": false, 00:07:28.396 "zone_management": false, 00:07:28.396 "zone_append": false, 00:07:28.396 "compare": false, 00:07:28.396 "compare_and_write": false, 00:07:28.396 
"abort": true, 00:07:28.396 "seek_hole": false, 00:07:28.396 "seek_data": false, 00:07:28.396 "copy": true, 00:07:28.396 "nvme_iov_md": false 00:07:28.396 }, 00:07:28.396 "memory_domains": [ 00:07:28.396 { 00:07:28.396 "dma_device_id": "system", 00:07:28.396 "dma_device_type": 1 00:07:28.396 }, 00:07:28.396 { 00:07:28.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.396 "dma_device_type": 2 00:07:28.396 } 00:07:28.396 ], 00:07:28.396 "driver_specific": {} 00:07:28.396 } 00:07:28.396 ] 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.396 "name": "Existed_Raid", 00:07:28.396 "uuid": "da0eeb07-6342-4183-a65c-fb44d1be172b", 00:07:28.396 "strip_size_kb": 64, 00:07:28.396 "state": "configuring", 00:07:28.396 "raid_level": "raid0", 00:07:28.396 "superblock": true, 00:07:28.396 "num_base_bdevs": 2, 00:07:28.396 "num_base_bdevs_discovered": 1, 00:07:28.396 "num_base_bdevs_operational": 2, 00:07:28.396 "base_bdevs_list": [ 00:07:28.396 { 00:07:28.396 "name": "BaseBdev1", 00:07:28.396 "uuid": "cac964ef-a7bc-4f09-bfd0-8030516c3dc7", 00:07:28.396 "is_configured": true, 00:07:28.396 "data_offset": 2048, 00:07:28.396 "data_size": 63488 00:07:28.396 }, 00:07:28.396 { 00:07:28.396 "name": "BaseBdev2", 00:07:28.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.396 "is_configured": false, 00:07:28.396 "data_offset": 0, 00:07:28.396 "data_size": 0 00:07:28.396 } 00:07:28.396 ] 00:07:28.396 }' 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.396 11:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.962 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.962 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.962 11:59:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.962 [2024-11-19 11:59:32.138063] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.963 [2024-11-19 11:59:32.138239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.963 [2024-11-19 11:59:32.150097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.963 [2024-11-19 11:59:32.152332] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.963 [2024-11-19 11:59:32.152434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.963 "name": "Existed_Raid", 00:07:28.963 "uuid": "469c8b09-6e7f-4c69-b928-2262be79a141", 00:07:28.963 "strip_size_kb": 64, 00:07:28.963 "state": "configuring", 00:07:28.963 "raid_level": "raid0", 00:07:28.963 "superblock": true, 00:07:28.963 "num_base_bdevs": 2, 00:07:28.963 "num_base_bdevs_discovered": 1, 00:07:28.963 "num_base_bdevs_operational": 2, 00:07:28.963 "base_bdevs_list": [ 00:07:28.963 { 00:07:28.963 "name": "BaseBdev1", 00:07:28.963 "uuid": "cac964ef-a7bc-4f09-bfd0-8030516c3dc7", 00:07:28.963 "is_configured": true, 00:07:28.963 "data_offset": 2048, 
00:07:28.963 "data_size": 63488 00:07:28.963 }, 00:07:28.963 { 00:07:28.963 "name": "BaseBdev2", 00:07:28.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.963 "is_configured": false, 00:07:28.963 "data_offset": 0, 00:07:28.963 "data_size": 0 00:07:28.963 } 00:07:28.963 ] 00:07:28.963 }' 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.963 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.532 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:29.532 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.532 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.532 [2024-11-19 11:59:32.646907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.532 [2024-11-19 11:59:32.647332] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:29.532 [2024-11-19 11:59:32.647385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:29.532 [2024-11-19 11:59:32.647680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:29.532 BaseBdev2 00:07:29.532 [2024-11-19 11:59:32.647883] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:29.532 [2024-11-19 11:59:32.647901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:29.532 [2024-11-19 11:59:32.648056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.533 [ 00:07:29.533 { 00:07:29.533 "name": "BaseBdev2", 00:07:29.533 "aliases": [ 00:07:29.533 "6507b49a-7a83-4db1-98bc-898134a5f424" 00:07:29.533 ], 00:07:29.533 "product_name": "Malloc disk", 00:07:29.533 "block_size": 512, 00:07:29.533 "num_blocks": 65536, 00:07:29.533 "uuid": "6507b49a-7a83-4db1-98bc-898134a5f424", 00:07:29.533 "assigned_rate_limits": { 00:07:29.533 "rw_ios_per_sec": 0, 00:07:29.533 "rw_mbytes_per_sec": 0, 00:07:29.533 "r_mbytes_per_sec": 0, 00:07:29.533 "w_mbytes_per_sec": 0 00:07:29.533 }, 00:07:29.533 "claimed": true, 00:07:29.533 "claim_type": 
"exclusive_write", 00:07:29.533 "zoned": false, 00:07:29.533 "supported_io_types": { 00:07:29.533 "read": true, 00:07:29.533 "write": true, 00:07:29.533 "unmap": true, 00:07:29.533 "flush": true, 00:07:29.533 "reset": true, 00:07:29.533 "nvme_admin": false, 00:07:29.533 "nvme_io": false, 00:07:29.533 "nvme_io_md": false, 00:07:29.533 "write_zeroes": true, 00:07:29.533 "zcopy": true, 00:07:29.533 "get_zone_info": false, 00:07:29.533 "zone_management": false, 00:07:29.533 "zone_append": false, 00:07:29.533 "compare": false, 00:07:29.533 "compare_and_write": false, 00:07:29.533 "abort": true, 00:07:29.533 "seek_hole": false, 00:07:29.533 "seek_data": false, 00:07:29.533 "copy": true, 00:07:29.533 "nvme_iov_md": false 00:07:29.533 }, 00:07:29.533 "memory_domains": [ 00:07:29.533 { 00:07:29.533 "dma_device_id": "system", 00:07:29.533 "dma_device_type": 1 00:07:29.533 }, 00:07:29.533 { 00:07:29.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.533 "dma_device_type": 2 00:07:29.533 } 00:07:29.533 ], 00:07:29.533 "driver_specific": {} 00:07:29.533 } 00:07:29.533 ] 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.533 "name": "Existed_Raid", 00:07:29.533 "uuid": "469c8b09-6e7f-4c69-b928-2262be79a141", 00:07:29.533 "strip_size_kb": 64, 00:07:29.533 "state": "online", 00:07:29.533 "raid_level": "raid0", 00:07:29.533 "superblock": true, 00:07:29.533 "num_base_bdevs": 2, 00:07:29.533 "num_base_bdevs_discovered": 2, 00:07:29.533 "num_base_bdevs_operational": 2, 00:07:29.533 "base_bdevs_list": [ 00:07:29.533 { 00:07:29.533 "name": "BaseBdev1", 00:07:29.533 "uuid": "cac964ef-a7bc-4f09-bfd0-8030516c3dc7", 00:07:29.533 "is_configured": true, 00:07:29.533 "data_offset": 2048, 00:07:29.533 "data_size": 63488 
00:07:29.533 }, 00:07:29.533 { 00:07:29.533 "name": "BaseBdev2", 00:07:29.533 "uuid": "6507b49a-7a83-4db1-98bc-898134a5f424", 00:07:29.533 "is_configured": true, 00:07:29.533 "data_offset": 2048, 00:07:29.533 "data_size": 63488 00:07:29.533 } 00:07:29.533 ] 00:07:29.533 }' 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.533 11:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.792 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:29.792 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:29.792 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:29.792 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:29.792 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:29.792 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:29.792 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:29.792 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:29.792 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.792 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.792 [2024-11-19 11:59:33.110582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.792 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.792 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:29.792 "name": 
"Existed_Raid", 00:07:29.792 "aliases": [ 00:07:29.792 "469c8b09-6e7f-4c69-b928-2262be79a141" 00:07:29.792 ], 00:07:29.792 "product_name": "Raid Volume", 00:07:29.792 "block_size": 512, 00:07:29.792 "num_blocks": 126976, 00:07:29.792 "uuid": "469c8b09-6e7f-4c69-b928-2262be79a141", 00:07:29.792 "assigned_rate_limits": { 00:07:29.792 "rw_ios_per_sec": 0, 00:07:29.792 "rw_mbytes_per_sec": 0, 00:07:29.792 "r_mbytes_per_sec": 0, 00:07:29.792 "w_mbytes_per_sec": 0 00:07:29.792 }, 00:07:29.792 "claimed": false, 00:07:29.792 "zoned": false, 00:07:29.792 "supported_io_types": { 00:07:29.792 "read": true, 00:07:29.792 "write": true, 00:07:29.792 "unmap": true, 00:07:29.792 "flush": true, 00:07:29.792 "reset": true, 00:07:29.792 "nvme_admin": false, 00:07:29.792 "nvme_io": false, 00:07:29.792 "nvme_io_md": false, 00:07:29.792 "write_zeroes": true, 00:07:29.792 "zcopy": false, 00:07:29.792 "get_zone_info": false, 00:07:29.792 "zone_management": false, 00:07:29.792 "zone_append": false, 00:07:29.792 "compare": false, 00:07:29.792 "compare_and_write": false, 00:07:29.792 "abort": false, 00:07:29.792 "seek_hole": false, 00:07:29.792 "seek_data": false, 00:07:29.792 "copy": false, 00:07:29.792 "nvme_iov_md": false 00:07:29.792 }, 00:07:29.792 "memory_domains": [ 00:07:29.792 { 00:07:29.792 "dma_device_id": "system", 00:07:29.792 "dma_device_type": 1 00:07:29.792 }, 00:07:29.792 { 00:07:29.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.792 "dma_device_type": 2 00:07:29.792 }, 00:07:29.792 { 00:07:29.792 "dma_device_id": "system", 00:07:29.792 "dma_device_type": 1 00:07:29.792 }, 00:07:29.792 { 00:07:29.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.792 "dma_device_type": 2 00:07:29.792 } 00:07:29.792 ], 00:07:29.792 "driver_specific": { 00:07:29.792 "raid": { 00:07:29.792 "uuid": "469c8b09-6e7f-4c69-b928-2262be79a141", 00:07:29.792 "strip_size_kb": 64, 00:07:29.792 "state": "online", 00:07:29.792 "raid_level": "raid0", 00:07:29.792 "superblock": true, 00:07:29.792 
"num_base_bdevs": 2, 00:07:29.792 "num_base_bdevs_discovered": 2, 00:07:29.792 "num_base_bdevs_operational": 2, 00:07:29.792 "base_bdevs_list": [ 00:07:29.792 { 00:07:29.792 "name": "BaseBdev1", 00:07:29.792 "uuid": "cac964ef-a7bc-4f09-bfd0-8030516c3dc7", 00:07:29.792 "is_configured": true, 00:07:29.792 "data_offset": 2048, 00:07:29.793 "data_size": 63488 00:07:29.793 }, 00:07:29.793 { 00:07:29.793 "name": "BaseBdev2", 00:07:29.793 "uuid": "6507b49a-7a83-4db1-98bc-898134a5f424", 00:07:29.793 "is_configured": true, 00:07:29.793 "data_offset": 2048, 00:07:29.793 "data_size": 63488 00:07:29.793 } 00:07:29.793 ] 00:07:29.793 } 00:07:29.793 } 00:07:29.793 }' 00:07:29.793 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:30.051 BaseBdev2' 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.051 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.051 [2024-11-19 11:59:33.357961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:30.051 [2024-11-19 11:59:33.358114] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.051 [2024-11-19 11:59:33.358224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.310 11:59:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.310 "name": "Existed_Raid", 00:07:30.310 "uuid": "469c8b09-6e7f-4c69-b928-2262be79a141", 00:07:30.310 "strip_size_kb": 64, 00:07:30.310 "state": "offline", 00:07:30.310 "raid_level": "raid0", 00:07:30.310 "superblock": true, 00:07:30.310 "num_base_bdevs": 2, 00:07:30.310 "num_base_bdevs_discovered": 1, 00:07:30.310 "num_base_bdevs_operational": 1, 00:07:30.310 "base_bdevs_list": [ 00:07:30.310 { 00:07:30.310 "name": null, 00:07:30.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.310 "is_configured": false, 00:07:30.310 "data_offset": 0, 00:07:30.310 "data_size": 63488 00:07:30.310 }, 00:07:30.310 { 00:07:30.310 "name": "BaseBdev2", 00:07:30.310 "uuid": "6507b49a-7a83-4db1-98bc-898134a5f424", 00:07:30.310 "is_configured": true, 00:07:30.310 "data_offset": 2048, 00:07:30.310 "data_size": 63488 00:07:30.310 } 00:07:30.310 ] 00:07:30.310 }' 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.310 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.570 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:30.570 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.570 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:30.570 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.570 11:59:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.570 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.570 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.570 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:30.570 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:30.570 11:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:30.570 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.570 11:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.830 [2024-11-19 11:59:33.948298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:30.830 [2024-11-19 11:59:33.948418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:30.830 11:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.830 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:30.830 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.830 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.830 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:30.830 11:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.831 11:59:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61016 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61016 ']' 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61016 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61016 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61016' 00:07:30.831 killing process with pid 61016 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61016 00:07:30.831 [2024-11-19 11:59:34.138227] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.831 11:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61016 00:07:30.831 [2024-11-19 11:59:34.155040] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.232 11:59:35 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:32.232 00:07:32.232 real 0m5.085s 00:07:32.232 user 0m7.241s 00:07:32.232 sys 0m0.912s 00:07:32.232 11:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.232 11:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.232 ************************************ 00:07:32.232 END TEST raid_state_function_test_sb 00:07:32.232 ************************************ 00:07:32.232 11:59:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:32.232 11:59:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:32.232 11:59:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.232 11:59:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.232 ************************************ 00:07:32.232 START TEST raid_superblock_test 00:07:32.232 ************************************ 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:32.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61268 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61268 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61268 ']' 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.232 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:32.232 [2024-11-19 11:59:35.421977] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:07:32.232 [2024-11-19 11:59:35.422203] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61268 ] 00:07:32.232 [2024-11-19 11:59:35.597094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.491 [2024-11-19 11:59:35.717808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.751 [2024-11-19 11:59:35.918163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.751 [2024-11-19 11:59:35.918288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:33.012 
11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.012 malloc1 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.012 [2024-11-19 11:59:36.288821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:33.012 [2024-11-19 11:59:36.288956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.012 [2024-11-19 11:59:36.289007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:33.012 [2024-11-19 11:59:36.289050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.012 [2024-11-19 11:59:36.291169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.012 [2024-11-19 11:59:36.291240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:33.012 pt1 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.012 malloc2 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.012 [2024-11-19 11:59:36.344902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:33.012 [2024-11-19 11:59:36.345074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.012 [2024-11-19 
11:59:36.345119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:33.012 [2024-11-19 11:59:36.345159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.012 [2024-11-19 11:59:36.347351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.012 [2024-11-19 11:59:36.347424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:33.012 pt2 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.012 [2024-11-19 11:59:36.356923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:33.012 [2024-11-19 11:59:36.358719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:33.012 [2024-11-19 11:59:36.358917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:33.012 [2024-11-19 11:59:36.358979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.012 [2024-11-19 11:59:36.359263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:33.012 [2024-11-19 11:59:36.359452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:33.012 [2024-11-19 11:59:36.359495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev 
is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:33.012 [2024-11-19 11:59:36.359695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.012 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.013 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.272 11:59:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.272 "name": "raid_bdev1", 00:07:33.272 "uuid": "b81221ba-c63c-4df1-b292-98c2f735996e", 00:07:33.272 "strip_size_kb": 64, 00:07:33.272 "state": "online", 00:07:33.272 "raid_level": "raid0", 00:07:33.272 "superblock": true, 00:07:33.272 "num_base_bdevs": 2, 00:07:33.272 "num_base_bdevs_discovered": 2, 00:07:33.272 "num_base_bdevs_operational": 2, 00:07:33.272 "base_bdevs_list": [ 00:07:33.272 { 00:07:33.272 "name": "pt1", 00:07:33.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.272 "is_configured": true, 00:07:33.272 "data_offset": 2048, 00:07:33.272 "data_size": 63488 00:07:33.272 }, 00:07:33.272 { 00:07:33.272 "name": "pt2", 00:07:33.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.272 "is_configured": true, 00:07:33.272 "data_offset": 2048, 00:07:33.272 "data_size": 63488 00:07:33.272 } 00:07:33.272 ] 00:07:33.272 }' 00:07:33.272 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.272 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.533 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:33.533 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:33.533 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:33.533 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:33.533 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:33.533 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:33.533 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.533 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:33.533 11:59:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.533 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.533 [2024-11-19 11:59:36.812432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.533 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.533 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:33.533 "name": "raid_bdev1", 00:07:33.533 "aliases": [ 00:07:33.533 "b81221ba-c63c-4df1-b292-98c2f735996e" 00:07:33.533 ], 00:07:33.533 "product_name": "Raid Volume", 00:07:33.533 "block_size": 512, 00:07:33.533 "num_blocks": 126976, 00:07:33.533 "uuid": "b81221ba-c63c-4df1-b292-98c2f735996e", 00:07:33.533 "assigned_rate_limits": { 00:07:33.533 "rw_ios_per_sec": 0, 00:07:33.533 "rw_mbytes_per_sec": 0, 00:07:33.533 "r_mbytes_per_sec": 0, 00:07:33.533 "w_mbytes_per_sec": 0 00:07:33.533 }, 00:07:33.533 "claimed": false, 00:07:33.533 "zoned": false, 00:07:33.533 "supported_io_types": { 00:07:33.533 "read": true, 00:07:33.533 "write": true, 00:07:33.533 "unmap": true, 00:07:33.533 "flush": true, 00:07:33.533 "reset": true, 00:07:33.533 "nvme_admin": false, 00:07:33.533 "nvme_io": false, 00:07:33.533 "nvme_io_md": false, 00:07:33.533 "write_zeroes": true, 00:07:33.533 "zcopy": false, 00:07:33.533 "get_zone_info": false, 00:07:33.533 "zone_management": false, 00:07:33.533 "zone_append": false, 00:07:33.533 "compare": false, 00:07:33.533 "compare_and_write": false, 00:07:33.533 "abort": false, 00:07:33.533 "seek_hole": false, 00:07:33.533 "seek_data": false, 00:07:33.533 "copy": false, 00:07:33.533 "nvme_iov_md": false 00:07:33.533 }, 00:07:33.533 "memory_domains": [ 00:07:33.533 { 00:07:33.533 "dma_device_id": "system", 00:07:33.533 "dma_device_type": 1 00:07:33.533 }, 00:07:33.533 { 00:07:33.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.533 "dma_device_type": 
2 00:07:33.533 }, 00:07:33.533 { 00:07:33.533 "dma_device_id": "system", 00:07:33.533 "dma_device_type": 1 00:07:33.533 }, 00:07:33.533 { 00:07:33.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.533 "dma_device_type": 2 00:07:33.533 } 00:07:33.533 ], 00:07:33.533 "driver_specific": { 00:07:33.533 "raid": { 00:07:33.533 "uuid": "b81221ba-c63c-4df1-b292-98c2f735996e", 00:07:33.533 "strip_size_kb": 64, 00:07:33.533 "state": "online", 00:07:33.533 "raid_level": "raid0", 00:07:33.533 "superblock": true, 00:07:33.533 "num_base_bdevs": 2, 00:07:33.533 "num_base_bdevs_discovered": 2, 00:07:33.533 "num_base_bdevs_operational": 2, 00:07:33.533 "base_bdevs_list": [ 00:07:33.533 { 00:07:33.533 "name": "pt1", 00:07:33.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.533 "is_configured": true, 00:07:33.533 "data_offset": 2048, 00:07:33.533 "data_size": 63488 00:07:33.533 }, 00:07:33.533 { 00:07:33.533 "name": "pt2", 00:07:33.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.533 "is_configured": true, 00:07:33.533 "data_offset": 2048, 00:07:33.533 "data_size": 63488 00:07:33.533 } 00:07:33.533 ] 00:07:33.533 } 00:07:33.533 } 00:07:33.533 }' 00:07:33.533 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:33.533 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:33.533 pt2' 00:07:33.533 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.794 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:33.794 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.794 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:33.794 11:59:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.794 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.794 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.794 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.794 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.794 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.794 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.794 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:33.794 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.794 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.794 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.794 
[2024-11-19 11:59:37.035948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b81221ba-c63c-4df1-b292-98c2f735996e 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b81221ba-c63c-4df1-b292-98c2f735996e ']' 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.794 [2024-11-19 11:59:37.079614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.794 [2024-11-19 11:59:37.079680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.794 [2024-11-19 11:59:37.079790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.794 [2024-11-19 11:59:37.079852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.794 [2024-11-19 11:59:37.079890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.794 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.055 [2024-11-19 11:59:37.223412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:34.055 [2024-11-19 11:59:37.225284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:34.055 [2024-11-19 11:59:37.225346] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:34.055 [2024-11-19 11:59:37.225395] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:34.055 [2024-11-19 11:59:37.225410] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:34.055 [2024-11-19 11:59:37.225422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:34.055 request: 00:07:34.055 { 00:07:34.055 "name": "raid_bdev1", 00:07:34.055 "raid_level": "raid0", 00:07:34.055 "base_bdevs": [ 00:07:34.055 "malloc1", 00:07:34.055 "malloc2" 00:07:34.055 ], 00:07:34.055 "strip_size_kb": 64, 00:07:34.055 "superblock": false, 00:07:34.055 "method": "bdev_raid_create", 00:07:34.055 "req_id": 1 00:07:34.055 } 00:07:34.055 Got JSON-RPC error response 00:07:34.055 response: 00:07:34.055 { 00:07:34.055 "code": -17, 00:07:34.055 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:34.055 } 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 
00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.055 [2024-11-19 11:59:37.287277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:34.055 [2024-11-19 11:59:37.287373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.055 [2024-11-19 11:59:37.287427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:34.055 [2024-11-19 11:59:37.287459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.055 [2024-11-19 11:59:37.289571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.055 [2024-11-19 11:59:37.289641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:34.055 [2024-11-19 11:59:37.289736] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:34.055 [2024-11-19 11:59:37.289840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:34.055 pt1 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.055 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.055 "name": "raid_bdev1", 00:07:34.055 "uuid": "b81221ba-c63c-4df1-b292-98c2f735996e", 00:07:34.055 "strip_size_kb": 64, 00:07:34.055 "state": "configuring", 00:07:34.055 "raid_level": "raid0", 00:07:34.056 "superblock": true, 00:07:34.056 "num_base_bdevs": 2, 00:07:34.056 "num_base_bdevs_discovered": 1, 00:07:34.056 "num_base_bdevs_operational": 2, 00:07:34.056 "base_bdevs_list": [ 00:07:34.056 { 00:07:34.056 "name": "pt1", 00:07:34.056 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:34.056 "is_configured": true, 00:07:34.056 "data_offset": 2048, 00:07:34.056 "data_size": 63488 00:07:34.056 }, 00:07:34.056 { 00:07:34.056 "name": null, 00:07:34.056 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:34.056 "is_configured": false, 
00:07:34.056 "data_offset": 2048, 00:07:34.056 "data_size": 63488 00:07:34.056 } 00:07:34.056 ] 00:07:34.056 }' 00:07:34.056 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.056 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.623 [2024-11-19 11:59:37.726603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:34.623 [2024-11-19 11:59:37.726713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.623 [2024-11-19 11:59:37.726734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:34.623 [2024-11-19 11:59:37.726745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.623 [2024-11-19 11:59:37.727235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.623 [2024-11-19 11:59:37.727270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:34.623 [2024-11-19 11:59:37.727358] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:34.623 [2024-11-19 11:59:37.727381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:34.623 [2024-11-19 11:59:37.727487] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:34.623 [2024-11-19 11:59:37.727499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:34.623 [2024-11-19 11:59:37.727734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:34.623 [2024-11-19 11:59:37.727871] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:34.623 [2024-11-19 11:59:37.727880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:34.623 [2024-11-19 11:59:37.728029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.623 pt2 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.623 11:59:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.623 "name": "raid_bdev1", 00:07:34.623 "uuid": "b81221ba-c63c-4df1-b292-98c2f735996e", 00:07:34.623 "strip_size_kb": 64, 00:07:34.623 "state": "online", 00:07:34.623 "raid_level": "raid0", 00:07:34.623 "superblock": true, 00:07:34.623 "num_base_bdevs": 2, 00:07:34.623 "num_base_bdevs_discovered": 2, 00:07:34.623 "num_base_bdevs_operational": 2, 00:07:34.623 "base_bdevs_list": [ 00:07:34.623 { 00:07:34.623 "name": "pt1", 00:07:34.623 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:34.623 "is_configured": true, 00:07:34.623 "data_offset": 2048, 00:07:34.623 "data_size": 63488 00:07:34.623 }, 00:07:34.623 { 00:07:34.623 "name": "pt2", 00:07:34.623 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:34.623 "is_configured": true, 00:07:34.623 "data_offset": 2048, 00:07:34.623 "data_size": 63488 00:07:34.623 } 00:07:34.623 ] 00:07:34.623 }' 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.623 11:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.882 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # 
verify_raid_bdev_properties raid_bdev1 00:07:34.882 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:34.882 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.882 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.882 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.882 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.882 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:34.882 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.882 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.882 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.882 [2024-11-19 11:59:38.198146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.882 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.882 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.882 "name": "raid_bdev1", 00:07:34.882 "aliases": [ 00:07:34.882 "b81221ba-c63c-4df1-b292-98c2f735996e" 00:07:34.882 ], 00:07:34.882 "product_name": "Raid Volume", 00:07:34.882 "block_size": 512, 00:07:34.882 "num_blocks": 126976, 00:07:34.882 "uuid": "b81221ba-c63c-4df1-b292-98c2f735996e", 00:07:34.882 "assigned_rate_limits": { 00:07:34.882 "rw_ios_per_sec": 0, 00:07:34.882 "rw_mbytes_per_sec": 0, 00:07:34.882 "r_mbytes_per_sec": 0, 00:07:34.882 "w_mbytes_per_sec": 0 00:07:34.882 }, 00:07:34.882 "claimed": false, 00:07:34.882 "zoned": false, 00:07:34.882 "supported_io_types": { 00:07:34.882 "read": true, 00:07:34.882 "write": true, 00:07:34.882 "unmap": true, 
00:07:34.882 "flush": true, 00:07:34.882 "reset": true, 00:07:34.882 "nvme_admin": false, 00:07:34.882 "nvme_io": false, 00:07:34.882 "nvme_io_md": false, 00:07:34.882 "write_zeroes": true, 00:07:34.882 "zcopy": false, 00:07:34.882 "get_zone_info": false, 00:07:34.882 "zone_management": false, 00:07:34.882 "zone_append": false, 00:07:34.882 "compare": false, 00:07:34.882 "compare_and_write": false, 00:07:34.882 "abort": false, 00:07:34.882 "seek_hole": false, 00:07:34.882 "seek_data": false, 00:07:34.882 "copy": false, 00:07:34.882 "nvme_iov_md": false 00:07:34.882 }, 00:07:34.882 "memory_domains": [ 00:07:34.882 { 00:07:34.882 "dma_device_id": "system", 00:07:34.882 "dma_device_type": 1 00:07:34.882 }, 00:07:34.882 { 00:07:34.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.882 "dma_device_type": 2 00:07:34.882 }, 00:07:34.882 { 00:07:34.882 "dma_device_id": "system", 00:07:34.882 "dma_device_type": 1 00:07:34.882 }, 00:07:34.882 { 00:07:34.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.882 "dma_device_type": 2 00:07:34.882 } 00:07:34.882 ], 00:07:34.882 "driver_specific": { 00:07:34.882 "raid": { 00:07:34.882 "uuid": "b81221ba-c63c-4df1-b292-98c2f735996e", 00:07:34.882 "strip_size_kb": 64, 00:07:34.882 "state": "online", 00:07:34.882 "raid_level": "raid0", 00:07:34.882 "superblock": true, 00:07:34.882 "num_base_bdevs": 2, 00:07:34.882 "num_base_bdevs_discovered": 2, 00:07:34.882 "num_base_bdevs_operational": 2, 00:07:34.882 "base_bdevs_list": [ 00:07:34.882 { 00:07:34.882 "name": "pt1", 00:07:34.882 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:34.882 "is_configured": true, 00:07:34.882 "data_offset": 2048, 00:07:34.882 "data_size": 63488 00:07:34.882 }, 00:07:34.882 { 00:07:34.882 "name": "pt2", 00:07:34.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:34.882 "is_configured": true, 00:07:34.882 "data_offset": 2048, 00:07:34.882 "data_size": 63488 00:07:34.882 } 00:07:34.882 ] 00:07:34.882 } 00:07:34.882 } 00:07:34.882 }' 
00:07:34.882 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:35.140 pt2' 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.140 11:59:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.141 [2024-11-19 11:59:38.401797] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b81221ba-c63c-4df1-b292-98c2f735996e '!=' b81221ba-c63c-4df1-b292-98c2f735996e ']' 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61268 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61268 ']' 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61268 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61268 00:07:35.141 killing process with pid 61268 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61268' 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61268 00:07:35.141 11:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61268 00:07:35.141 [2024-11-19 11:59:38.458777] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.141 [2024-11-19 11:59:38.458904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.141 [2024-11-19 11:59:38.458980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.141 [2024-11-19 11:59:38.459015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:35.399 [2024-11-19 11:59:38.672964] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.779 11:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:36.779 00:07:36.779 real 0m4.465s 00:07:36.779 user 0m6.254s 00:07:36.779 sys 0m0.729s 00:07:36.779 11:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.779 ************************************ 00:07:36.779 END TEST raid_superblock_test 00:07:36.779 ************************************ 00:07:36.779 11:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.779 11:59:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test 
raid_io_error_test raid0 2 read 00:07:36.779 11:59:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:36.779 11:59:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.779 11:59:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.779 ************************************ 00:07:36.779 START TEST raid_read_error_test 00:07:36.779 ************************************ 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local 
raid_bdev_name=raid_bdev1 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tgkCwF5FRw 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61474 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61474 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61474 ']' 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.779 11:59:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.779 [2024-11-19 11:59:39.988524] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:07:36.779 [2024-11-19 11:59:39.988689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61474 ] 00:07:37.059 [2024-11-19 11:59:40.171696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.059 [2024-11-19 11:59:40.287933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.318 [2024-11-19 11:59:40.488761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.318 [2024-11-19 11:59:40.488897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.578 BaseBdev1_malloc 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.578 true 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.578 [2024-11-19 11:59:40.896608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:37.578 [2024-11-19 11:59:40.896779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.578 [2024-11-19 11:59:40.896826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:37.578 [2024-11-19 11:59:40.896864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.578 [2024-11-19 11:59:40.898965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.578 [2024-11-19 11:59:40.899071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:37.578 BaseBdev1 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:37.578 BaseBdev2_malloc 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.578 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.837 true 00:07:37.837 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.837 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:37.837 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.837 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.837 [2024-11-19 11:59:40.962558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:37.837 [2024-11-19 11:59:40.962701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.837 [2024-11-19 11:59:40.962735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:37.837 [2024-11-19 11:59:40.962768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.837 [2024-11-19 11:59:40.965043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.837 [2024-11-19 11:59:40.965144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:37.837 BaseBdev2 00:07:37.837 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.837 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:37.837 11:59:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.837 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.837 [2024-11-19 11:59:40.974591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.838 [2024-11-19 11:59:40.976404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.838 [2024-11-19 11:59:40.976629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:37.838 [2024-11-19 11:59:40.976680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:37.838 [2024-11-19 11:59:40.976925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:37.838 [2024-11-19 11:59:40.977144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:37.838 [2024-11-19 11:59:40.977192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:37.838 [2024-11-19 11:59:40.977392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.838 11:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.838 11:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.838 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.838 "name": "raid_bdev1", 00:07:37.838 "uuid": "eaa8c854-ebc9-4780-a84b-368ee3521cd1", 00:07:37.838 "strip_size_kb": 64, 00:07:37.838 "state": "online", 00:07:37.838 "raid_level": "raid0", 00:07:37.838 "superblock": true, 00:07:37.838 "num_base_bdevs": 2, 00:07:37.838 "num_base_bdevs_discovered": 2, 00:07:37.838 "num_base_bdevs_operational": 2, 00:07:37.838 "base_bdevs_list": [ 00:07:37.838 { 00:07:37.838 "name": "BaseBdev1", 00:07:37.838 "uuid": "820db0fb-8aca-596e-9076-a08ec9cd3ae3", 00:07:37.838 "is_configured": true, 00:07:37.838 "data_offset": 2048, 00:07:37.838 "data_size": 63488 00:07:37.838 }, 00:07:37.838 { 00:07:37.838 "name": "BaseBdev2", 00:07:37.838 "uuid": "5d25afa8-e091-50ac-a898-e638087c9e93", 00:07:37.838 "is_configured": true, 00:07:37.838 "data_offset": 2048, 00:07:37.838 "data_size": 63488 00:07:37.838 } 00:07:37.838 ] 00:07:37.838 }' 00:07:37.838 11:59:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.838 11:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.097 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:38.097 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:38.355 [2024-11-19 11:59:41.503115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.293 "name": "raid_bdev1", 00:07:39.293 "uuid": "eaa8c854-ebc9-4780-a84b-368ee3521cd1", 00:07:39.293 "strip_size_kb": 64, 00:07:39.293 "state": "online", 00:07:39.293 "raid_level": "raid0", 00:07:39.293 "superblock": true, 00:07:39.293 "num_base_bdevs": 2, 00:07:39.293 "num_base_bdevs_discovered": 2, 00:07:39.293 "num_base_bdevs_operational": 2, 00:07:39.293 "base_bdevs_list": [ 00:07:39.293 { 00:07:39.293 "name": "BaseBdev1", 00:07:39.293 "uuid": "820db0fb-8aca-596e-9076-a08ec9cd3ae3", 00:07:39.293 "is_configured": true, 00:07:39.293 "data_offset": 2048, 00:07:39.293 "data_size": 63488 00:07:39.293 }, 00:07:39.293 { 00:07:39.293 "name": "BaseBdev2", 00:07:39.293 "uuid": "5d25afa8-e091-50ac-a898-e638087c9e93", 00:07:39.293 "is_configured": true, 00:07:39.293 "data_offset": 2048, 00:07:39.293 "data_size": 63488 00:07:39.293 } 00:07:39.293 ] 00:07:39.293 }' 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.293 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.552 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:39.552 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.552 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.552 [2024-11-19 11:59:42.889437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:39.552 [2024-11-19 11:59:42.889594] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.552 [2024-11-19 11:59:42.892302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.552 [2024-11-19 11:59:42.892389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.552 [2024-11-19 11:59:42.892441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.552 [2024-11-19 11:59:42.892495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:39.552 { 00:07:39.552 "results": [ 00:07:39.552 { 00:07:39.552 "job": "raid_bdev1", 00:07:39.552 "core_mask": "0x1", 00:07:39.552 "workload": "randrw", 00:07:39.552 "percentage": 50, 00:07:39.552 "status": "finished", 00:07:39.552 "queue_depth": 1, 00:07:39.552 "io_size": 131072, 00:07:39.552 "runtime": 1.387372, 00:07:39.552 "iops": 16144.192040779257, 00:07:39.552 "mibps": 2018.024005097407, 00:07:39.552 "io_failed": 1, 00:07:39.552 "io_timeout": 0, 00:07:39.552 "avg_latency_us": 86.24236538944055, 00:07:39.552 "min_latency_us": 25.7117903930131, 00:07:39.552 "max_latency_us": 1445.2262008733624 00:07:39.552 } 00:07:39.552 ], 00:07:39.552 "core_count": 1 00:07:39.552 } 00:07:39.552 11:59:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.552 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61474 00:07:39.552 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61474 ']' 00:07:39.552 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61474 00:07:39.552 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:39.552 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.552 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61474 00:07:39.812 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.812 killing process with pid 61474 00:07:39.812 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.812 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61474' 00:07:39.812 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61474 00:07:39.812 [2024-11-19 11:59:42.940481] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.812 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61474 00:07:39.812 [2024-11-19 11:59:43.074318] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.191 11:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tgkCwF5FRw 00:07:41.191 11:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:41.191 11:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:41.191 11:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:41.191 11:59:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:41.191 11:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:41.191 11:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:41.191 11:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:41.191 00:07:41.191 real 0m4.382s 00:07:41.192 user 0m5.242s 00:07:41.192 sys 0m0.573s 00:07:41.192 11:59:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.192 ************************************ 00:07:41.192 END TEST raid_read_error_test 00:07:41.192 ************************************ 00:07:41.192 11:59:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.192 11:59:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:41.192 11:59:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:41.192 11:59:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.192 11:59:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.192 ************************************ 00:07:41.192 START TEST raid_write_error_test 00:07:41.192 ************************************ 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.192 11:59:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.C00kSSCaIF 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61614 00:07:41.192 11:59:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61614 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61614 ']' 00:07:41.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.192 11:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.192 [2024-11-19 11:59:44.443870] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:41.192 [2024-11-19 11:59:44.444766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61614 ] 00:07:41.451 [2024-11-19 11:59:44.630202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.451 [2024-11-19 11:59:44.769830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.710 [2024-11-19 11:59:45.013056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.710 [2024-11-19 11:59:45.013225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.972 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.972 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:41.972 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:41.972 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:41.972 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.972 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.972 BaseBdev1_malloc 00:07:41.972 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.972 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:41.972 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.972 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.251 true 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.251 [2024-11-19 11:59:45.355120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:42.251 [2024-11-19 11:59:45.355302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.251 [2024-11-19 11:59:45.355349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:42.251 [2024-11-19 11:59:45.355385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.251 [2024-11-19 11:59:45.357632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.251 [2024-11-19 11:59:45.357707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:42.251 BaseBdev1 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.251 BaseBdev2_malloc 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:42.251 11:59:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.251 true 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.251 [2024-11-19 11:59:45.421803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:42.251 [2024-11-19 11:59:45.421977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.251 [2024-11-19 11:59:45.422023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:42.251 [2024-11-19 11:59:45.422066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.251 [2024-11-19 11:59:45.424251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.251 [2024-11-19 11:59:45.424345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:42.251 BaseBdev2 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.251 [2024-11-19 11:59:45.433853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:42.251 [2024-11-19 11:59:45.435795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.251 [2024-11-19 11:59:45.436074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:42.251 [2024-11-19 11:59:45.436138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.251 [2024-11-19 11:59:45.436442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:42.251 [2024-11-19 11:59:45.436694] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:42.251 [2024-11-19 11:59:45.436739] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:42.251 [2024-11-19 11:59:45.436972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.251 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.251 "name": "raid_bdev1", 00:07:42.251 "uuid": "3020f44a-066e-4541-b746-5b6f21f79ce9", 00:07:42.251 "strip_size_kb": 64, 00:07:42.251 "state": "online", 00:07:42.252 "raid_level": "raid0", 00:07:42.252 "superblock": true, 00:07:42.252 "num_base_bdevs": 2, 00:07:42.252 "num_base_bdevs_discovered": 2, 00:07:42.252 "num_base_bdevs_operational": 2, 00:07:42.252 "base_bdevs_list": [ 00:07:42.252 { 00:07:42.252 "name": "BaseBdev1", 00:07:42.252 "uuid": "df227032-0352-5ec5-998f-88a176c9f9a2", 00:07:42.252 "is_configured": true, 00:07:42.252 "data_offset": 2048, 00:07:42.252 "data_size": 63488 00:07:42.252 }, 00:07:42.252 { 00:07:42.252 "name": "BaseBdev2", 00:07:42.252 "uuid": "2b8fc4c9-4b68-5099-a383-bbdc2f8a4ec0", 00:07:42.252 "is_configured": true, 00:07:42.252 "data_offset": 2048, 00:07:42.252 "data_size": 63488 00:07:42.252 } 00:07:42.252 ] 00:07:42.252 }' 00:07:42.252 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.252 11:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.510 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:42.510 11:59:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:42.767 [2024-11-19 11:59:45.942769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:43.703 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:43.703 11:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.703 11:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.703 11:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.703 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:43.703 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:43.703 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.704 11:59:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.704 "name": "raid_bdev1", 00:07:43.704 "uuid": "3020f44a-066e-4541-b746-5b6f21f79ce9", 00:07:43.704 "strip_size_kb": 64, 00:07:43.704 "state": "online", 00:07:43.704 "raid_level": "raid0", 00:07:43.704 "superblock": true, 00:07:43.704 "num_base_bdevs": 2, 00:07:43.704 "num_base_bdevs_discovered": 2, 00:07:43.704 "num_base_bdevs_operational": 2, 00:07:43.704 "base_bdevs_list": [ 00:07:43.704 { 00:07:43.704 "name": "BaseBdev1", 00:07:43.704 "uuid": "df227032-0352-5ec5-998f-88a176c9f9a2", 00:07:43.704 "is_configured": true, 00:07:43.704 "data_offset": 2048, 00:07:43.704 "data_size": 63488 00:07:43.704 }, 00:07:43.704 { 00:07:43.704 "name": "BaseBdev2", 00:07:43.704 "uuid": "2b8fc4c9-4b68-5099-a383-bbdc2f8a4ec0", 00:07:43.704 "is_configured": true, 00:07:43.704 "data_offset": 2048, 00:07:43.704 "data_size": 63488 00:07:43.704 } 00:07:43.704 ] 00:07:43.704 }' 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.704 11:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.964 11:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:43.964 11:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.964 11:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.964 [2024-11-19 11:59:47.298908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.964 [2024-11-19 11:59:47.299071] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.964 [2024-11-19 11:59:47.301705] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.964 [2024-11-19 11:59:47.301785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.964 [2024-11-19 11:59:47.301834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.964 [2024-11-19 11:59:47.301875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:43.964 { 00:07:43.964 "results": [ 00:07:43.964 { 00:07:43.964 "job": "raid_bdev1", 00:07:43.964 "core_mask": "0x1", 00:07:43.964 "workload": "randrw", 00:07:43.964 "percentage": 50, 00:07:43.964 "status": "finished", 00:07:43.964 "queue_depth": 1, 00:07:43.964 "io_size": 131072, 00:07:43.964 "runtime": 1.356545, 00:07:43.964 "iops": 14822.950952603858, 00:07:43.964 "mibps": 1852.8688690754823, 00:07:43.964 "io_failed": 1, 00:07:43.964 "io_timeout": 0, 00:07:43.964 "avg_latency_us": 93.73808464393076, 00:07:43.964 "min_latency_us": 26.047161572052403, 00:07:43.964 "max_latency_us": 1387.989519650655 00:07:43.964 } 00:07:43.964 ], 00:07:43.964 "core_count": 1 00:07:43.964 } 00:07:43.964 11:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.964 11:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61614 00:07:43.964 11:59:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61614 ']' 00:07:43.964 11:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61614 00:07:43.964 11:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:43.964 11:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.964 11:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61614 00:07:44.223 11:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.223 killing process with pid 61614 00:07:44.223 11:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.223 11:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61614' 00:07:44.223 11:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61614 00:07:44.223 [2024-11-19 11:59:47.351404] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.223 11:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61614 00:07:44.223 [2024-11-19 11:59:47.489705] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.606 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:45.606 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.C00kSSCaIF 00:07:45.606 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:45.606 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:45.606 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:45.606 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.606 11:59:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:45.606 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:45.606 ************************************ 00:07:45.606 END TEST raid_write_error_test 00:07:45.606 ************************************ 00:07:45.606 00:07:45.606 real 0m4.324s 00:07:45.606 user 0m5.182s 00:07:45.606 sys 0m0.537s 00:07:45.606 11:59:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.606 11:59:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.606 11:59:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:45.606 11:59:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:45.606 11:59:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:45.606 11:59:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.606 11:59:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.606 ************************************ 00:07:45.606 START TEST raid_state_function_test 00:07:45.606 ************************************ 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61758 00:07:45.606 11:59:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:45.606 Process raid pid: 61758 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61758' 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61758 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61758 ']' 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.606 11:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.606 [2024-11-19 11:59:48.814960] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:45.606 [2024-11-19 11:59:48.815237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.866 [2024-11-19 11:59:48.988608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.866 [2024-11-19 11:59:49.110783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.123 [2024-11-19 11:59:49.322481] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.123 [2024-11-19 11:59:49.322625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.381 [2024-11-19 11:59:49.664978] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.381 [2024-11-19 11:59:49.665047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.381 [2024-11-19 11:59:49.665058] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.381 [2024-11-19 11:59:49.665067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.381 11:59:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.381 "name": "Existed_Raid", 00:07:46.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.381 "strip_size_kb": 64, 00:07:46.381 "state": "configuring", 00:07:46.381 
"raid_level": "concat", 00:07:46.381 "superblock": false, 00:07:46.381 "num_base_bdevs": 2, 00:07:46.381 "num_base_bdevs_discovered": 0, 00:07:46.381 "num_base_bdevs_operational": 2, 00:07:46.381 "base_bdevs_list": [ 00:07:46.381 { 00:07:46.381 "name": "BaseBdev1", 00:07:46.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.381 "is_configured": false, 00:07:46.381 "data_offset": 0, 00:07:46.381 "data_size": 0 00:07:46.381 }, 00:07:46.381 { 00:07:46.381 "name": "BaseBdev2", 00:07:46.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.381 "is_configured": false, 00:07:46.381 "data_offset": 0, 00:07:46.381 "data_size": 0 00:07:46.381 } 00:07:46.381 ] 00:07:46.381 }' 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.381 11:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.952 [2024-11-19 11:59:50.084239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.952 [2024-11-19 11:59:50.084281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:46.952 [2024-11-19 11:59:50.092214] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.952 [2024-11-19 11:59:50.092265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.952 [2024-11-19 11:59:50.092275] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.952 [2024-11-19 11:59:50.092289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.952 [2024-11-19 11:59:50.135750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.952 BaseBdev1 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.952 [ 00:07:46.952 { 00:07:46.952 "name": "BaseBdev1", 00:07:46.952 "aliases": [ 00:07:46.952 "b031109d-109e-4961-a91c-f01d82152fba" 00:07:46.952 ], 00:07:46.952 "product_name": "Malloc disk", 00:07:46.952 "block_size": 512, 00:07:46.952 "num_blocks": 65536, 00:07:46.952 "uuid": "b031109d-109e-4961-a91c-f01d82152fba", 00:07:46.952 "assigned_rate_limits": { 00:07:46.952 "rw_ios_per_sec": 0, 00:07:46.952 "rw_mbytes_per_sec": 0, 00:07:46.952 "r_mbytes_per_sec": 0, 00:07:46.952 "w_mbytes_per_sec": 0 00:07:46.952 }, 00:07:46.952 "claimed": true, 00:07:46.952 "claim_type": "exclusive_write", 00:07:46.952 "zoned": false, 00:07:46.952 "supported_io_types": { 00:07:46.952 "read": true, 00:07:46.952 "write": true, 00:07:46.952 "unmap": true, 00:07:46.952 "flush": true, 00:07:46.952 "reset": true, 00:07:46.952 "nvme_admin": false, 00:07:46.952 "nvme_io": false, 00:07:46.952 "nvme_io_md": false, 00:07:46.952 "write_zeroes": true, 00:07:46.952 "zcopy": true, 00:07:46.952 "get_zone_info": false, 00:07:46.952 "zone_management": false, 00:07:46.952 "zone_append": false, 00:07:46.952 "compare": false, 00:07:46.952 "compare_and_write": false, 00:07:46.952 "abort": true, 00:07:46.952 "seek_hole": false, 00:07:46.952 "seek_data": false, 00:07:46.952 "copy": true, 00:07:46.952 "nvme_iov_md": 
false 00:07:46.952 }, 00:07:46.952 "memory_domains": [ 00:07:46.952 { 00:07:46.952 "dma_device_id": "system", 00:07:46.952 "dma_device_type": 1 00:07:46.952 }, 00:07:46.952 { 00:07:46.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.952 "dma_device_type": 2 00:07:46.952 } 00:07:46.952 ], 00:07:46.952 "driver_specific": {} 00:07:46.952 } 00:07:46.952 ] 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.952 
11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.952 "name": "Existed_Raid", 00:07:46.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.952 "strip_size_kb": 64, 00:07:46.952 "state": "configuring", 00:07:46.952 "raid_level": "concat", 00:07:46.952 "superblock": false, 00:07:46.952 "num_base_bdevs": 2, 00:07:46.952 "num_base_bdevs_discovered": 1, 00:07:46.952 "num_base_bdevs_operational": 2, 00:07:46.952 "base_bdevs_list": [ 00:07:46.952 { 00:07:46.952 "name": "BaseBdev1", 00:07:46.952 "uuid": "b031109d-109e-4961-a91c-f01d82152fba", 00:07:46.952 "is_configured": true, 00:07:46.952 "data_offset": 0, 00:07:46.952 "data_size": 65536 00:07:46.952 }, 00:07:46.952 { 00:07:46.952 "name": "BaseBdev2", 00:07:46.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.952 "is_configured": false, 00:07:46.952 "data_offset": 0, 00:07:46.952 "data_size": 0 00:07:46.952 } 00:07:46.952 ] 00:07:46.952 }' 00:07:46.952 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.953 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.522 [2024-11-19 11:59:50.607069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:47.522 [2024-11-19 11:59:50.607220] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.522 [2024-11-19 11:59:50.619124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.522 [2024-11-19 11:59:50.621064] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.522 [2024-11-19 11:59:50.621168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.522 "name": "Existed_Raid", 00:07:47.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.522 "strip_size_kb": 64, 00:07:47.522 "state": "configuring", 00:07:47.522 "raid_level": "concat", 00:07:47.522 "superblock": false, 00:07:47.522 "num_base_bdevs": 2, 00:07:47.522 "num_base_bdevs_discovered": 1, 00:07:47.522 "num_base_bdevs_operational": 2, 00:07:47.522 "base_bdevs_list": [ 00:07:47.522 { 00:07:47.522 "name": "BaseBdev1", 00:07:47.522 "uuid": "b031109d-109e-4961-a91c-f01d82152fba", 00:07:47.522 "is_configured": true, 00:07:47.522 "data_offset": 0, 00:07:47.522 "data_size": 65536 00:07:47.522 }, 00:07:47.522 { 00:07:47.522 "name": "BaseBdev2", 00:07:47.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.522 "is_configured": false, 00:07:47.522 "data_offset": 0, 00:07:47.522 "data_size": 0 00:07:47.522 } 
00:07:47.522 ] 00:07:47.522 }' 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.522 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.783 [2024-11-19 11:59:51.051905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.783 [2024-11-19 11:59:51.052059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:47.783 [2024-11-19 11:59:51.052076] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:47.783 [2024-11-19 11:59:51.052381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:47.783 [2024-11-19 11:59:51.052535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:47.783 [2024-11-19 11:59:51.052549] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:47.783 [2024-11-19 11:59:51.052840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.783 BaseBdev2 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:47.783 11:59:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.783 [ 00:07:47.783 { 00:07:47.783 "name": "BaseBdev2", 00:07:47.783 "aliases": [ 00:07:47.783 "74a9cbef-0a8c-42e5-bb46-94b35963a9c0" 00:07:47.783 ], 00:07:47.783 "product_name": "Malloc disk", 00:07:47.783 "block_size": 512, 00:07:47.783 "num_blocks": 65536, 00:07:47.783 "uuid": "74a9cbef-0a8c-42e5-bb46-94b35963a9c0", 00:07:47.783 "assigned_rate_limits": { 00:07:47.783 "rw_ios_per_sec": 0, 00:07:47.783 "rw_mbytes_per_sec": 0, 00:07:47.783 "r_mbytes_per_sec": 0, 00:07:47.783 "w_mbytes_per_sec": 0 00:07:47.783 }, 00:07:47.783 "claimed": true, 00:07:47.783 "claim_type": "exclusive_write", 00:07:47.783 "zoned": false, 00:07:47.783 "supported_io_types": { 00:07:47.783 "read": true, 00:07:47.783 "write": true, 00:07:47.783 "unmap": true, 00:07:47.783 "flush": true, 00:07:47.783 "reset": true, 00:07:47.783 "nvme_admin": false, 00:07:47.783 "nvme_io": false, 00:07:47.783 "nvme_io_md": 
false, 00:07:47.783 "write_zeroes": true, 00:07:47.783 "zcopy": true, 00:07:47.783 "get_zone_info": false, 00:07:47.783 "zone_management": false, 00:07:47.783 "zone_append": false, 00:07:47.783 "compare": false, 00:07:47.783 "compare_and_write": false, 00:07:47.783 "abort": true, 00:07:47.783 "seek_hole": false, 00:07:47.783 "seek_data": false, 00:07:47.783 "copy": true, 00:07:47.783 "nvme_iov_md": false 00:07:47.783 }, 00:07:47.783 "memory_domains": [ 00:07:47.783 { 00:07:47.783 "dma_device_id": "system", 00:07:47.783 "dma_device_type": 1 00:07:47.783 }, 00:07:47.783 { 00:07:47.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.783 "dma_device_type": 2 00:07:47.783 } 00:07:47.783 ], 00:07:47.783 "driver_specific": {} 00:07:47.783 } 00:07:47.783 ] 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:47.783 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.784 "name": "Existed_Raid", 00:07:47.784 "uuid": "7571d7a1-230e-43b0-a647-0c12b6abfd69", 00:07:47.784 "strip_size_kb": 64, 00:07:47.784 "state": "online", 00:07:47.784 "raid_level": "concat", 00:07:47.784 "superblock": false, 00:07:47.784 "num_base_bdevs": 2, 00:07:47.784 "num_base_bdevs_discovered": 2, 00:07:47.784 "num_base_bdevs_operational": 2, 00:07:47.784 "base_bdevs_list": [ 00:07:47.784 { 00:07:47.784 "name": "BaseBdev1", 00:07:47.784 "uuid": "b031109d-109e-4961-a91c-f01d82152fba", 00:07:47.784 "is_configured": true, 00:07:47.784 "data_offset": 0, 00:07:47.784 "data_size": 65536 00:07:47.784 }, 00:07:47.784 { 00:07:47.784 "name": "BaseBdev2", 00:07:47.784 "uuid": "74a9cbef-0a8c-42e5-bb46-94b35963a9c0", 00:07:47.784 "is_configured": true, 00:07:47.784 "data_offset": 0, 00:07:47.784 "data_size": 65536 00:07:47.784 } 00:07:47.784 ] 00:07:47.784 }' 00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:47.784 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.354 [2024-11-19 11:59:51.507493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:48.354 "name": "Existed_Raid", 00:07:48.354 "aliases": [ 00:07:48.354 "7571d7a1-230e-43b0-a647-0c12b6abfd69" 00:07:48.354 ], 00:07:48.354 "product_name": "Raid Volume", 00:07:48.354 "block_size": 512, 00:07:48.354 "num_blocks": 131072, 00:07:48.354 "uuid": "7571d7a1-230e-43b0-a647-0c12b6abfd69", 00:07:48.354 "assigned_rate_limits": { 00:07:48.354 "rw_ios_per_sec": 0, 00:07:48.354 "rw_mbytes_per_sec": 0, 00:07:48.354 "r_mbytes_per_sec": 
0, 00:07:48.354 "w_mbytes_per_sec": 0 00:07:48.354 }, 00:07:48.354 "claimed": false, 00:07:48.354 "zoned": false, 00:07:48.354 "supported_io_types": { 00:07:48.354 "read": true, 00:07:48.354 "write": true, 00:07:48.354 "unmap": true, 00:07:48.354 "flush": true, 00:07:48.354 "reset": true, 00:07:48.354 "nvme_admin": false, 00:07:48.354 "nvme_io": false, 00:07:48.354 "nvme_io_md": false, 00:07:48.354 "write_zeroes": true, 00:07:48.354 "zcopy": false, 00:07:48.354 "get_zone_info": false, 00:07:48.354 "zone_management": false, 00:07:48.354 "zone_append": false, 00:07:48.354 "compare": false, 00:07:48.354 "compare_and_write": false, 00:07:48.354 "abort": false, 00:07:48.354 "seek_hole": false, 00:07:48.354 "seek_data": false, 00:07:48.354 "copy": false, 00:07:48.354 "nvme_iov_md": false 00:07:48.354 }, 00:07:48.354 "memory_domains": [ 00:07:48.354 { 00:07:48.354 "dma_device_id": "system", 00:07:48.354 "dma_device_type": 1 00:07:48.354 }, 00:07:48.354 { 00:07:48.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.354 "dma_device_type": 2 00:07:48.354 }, 00:07:48.354 { 00:07:48.354 "dma_device_id": "system", 00:07:48.354 "dma_device_type": 1 00:07:48.354 }, 00:07:48.354 { 00:07:48.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.354 "dma_device_type": 2 00:07:48.354 } 00:07:48.354 ], 00:07:48.354 "driver_specific": { 00:07:48.354 "raid": { 00:07:48.354 "uuid": "7571d7a1-230e-43b0-a647-0c12b6abfd69", 00:07:48.354 "strip_size_kb": 64, 00:07:48.354 "state": "online", 00:07:48.354 "raid_level": "concat", 00:07:48.354 "superblock": false, 00:07:48.354 "num_base_bdevs": 2, 00:07:48.354 "num_base_bdevs_discovered": 2, 00:07:48.354 "num_base_bdevs_operational": 2, 00:07:48.354 "base_bdevs_list": [ 00:07:48.354 { 00:07:48.354 "name": "BaseBdev1", 00:07:48.354 "uuid": "b031109d-109e-4961-a91c-f01d82152fba", 00:07:48.354 "is_configured": true, 00:07:48.354 "data_offset": 0, 00:07:48.354 "data_size": 65536 00:07:48.354 }, 00:07:48.354 { 00:07:48.354 "name": "BaseBdev2", 
00:07:48.354 "uuid": "74a9cbef-0a8c-42e5-bb46-94b35963a9c0", 00:07:48.354 "is_configured": true, 00:07:48.354 "data_offset": 0, 00:07:48.354 "data_size": 65536 00:07:48.354 } 00:07:48.354 ] 00:07:48.354 } 00:07:48.354 } 00:07:48.354 }' 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:48.354 BaseBdev2' 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.354 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.355 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.355 [2024-11-19 11:59:51.722902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:48.355 [2024-11-19 11:59:51.722944] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.355 [2024-11-19 11:59:51.723026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.615 "name": "Existed_Raid", 00:07:48.615 "uuid": "7571d7a1-230e-43b0-a647-0c12b6abfd69", 00:07:48.615 "strip_size_kb": 64, 00:07:48.615 
"state": "offline", 00:07:48.615 "raid_level": "concat", 00:07:48.615 "superblock": false, 00:07:48.615 "num_base_bdevs": 2, 00:07:48.615 "num_base_bdevs_discovered": 1, 00:07:48.615 "num_base_bdevs_operational": 1, 00:07:48.615 "base_bdevs_list": [ 00:07:48.615 { 00:07:48.615 "name": null, 00:07:48.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.615 "is_configured": false, 00:07:48.615 "data_offset": 0, 00:07:48.615 "data_size": 65536 00:07:48.615 }, 00:07:48.615 { 00:07:48.615 "name": "BaseBdev2", 00:07:48.615 "uuid": "74a9cbef-0a8c-42e5-bb46-94b35963a9c0", 00:07:48.615 "is_configured": true, 00:07:48.615 "data_offset": 0, 00:07:48.615 "data_size": 65536 00:07:48.615 } 00:07:48.615 ] 00:07:48.615 }' 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.615 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.185 [2024-11-19 11:59:52.312223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:49.185 [2024-11-19 11:59:52.312289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61758 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61758 ']' 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61758 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61758 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61758' 00:07:49.185 killing process with pid 61758 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61758 00:07:49.185 [2024-11-19 11:59:52.513998] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.185 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61758 00:07:49.185 [2024-11-19 11:59:52.530430] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:50.562 00:07:50.562 real 0m4.914s 00:07:50.562 user 0m7.010s 00:07:50.562 sys 0m0.829s 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.562 ************************************ 00:07:50.562 END TEST raid_state_function_test 00:07:50.562 ************************************ 00:07:50.562 11:59:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:50.562 11:59:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:50.562 11:59:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.562 11:59:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.562 ************************************ 00:07:50.562 START TEST raid_state_function_test_sb 00:07:50.562 ************************************ 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:50.562 Process raid pid: 62005 00:07:50.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62005 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62005' 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62005 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62005 ']' 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.562 11:59:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.562 [2024-11-19 11:59:53.763294] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:50.562 [2024-11-19 11:59:53.763473] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.821 [2024-11-19 11:59:53.939739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.821 [2024-11-19 11:59:54.113788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.078 [2024-11-19 11:59:54.380058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.078 [2024-11-19 11:59:54.380234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.642 [2024-11-19 11:59:54.814344] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.642 [2024-11-19 11:59:54.814521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.642 [2024-11-19 11:59:54.814568] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.642 [2024-11-19 11:59:54.814612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.642 "name": "Existed_Raid", 00:07:51.642 "uuid": "122a4921-264d-4bbd-9e9b-1289736a9360", 00:07:51.642 
"strip_size_kb": 64, 00:07:51.642 "state": "configuring", 00:07:51.642 "raid_level": "concat", 00:07:51.642 "superblock": true, 00:07:51.642 "num_base_bdevs": 2, 00:07:51.642 "num_base_bdevs_discovered": 0, 00:07:51.642 "num_base_bdevs_operational": 2, 00:07:51.642 "base_bdevs_list": [ 00:07:51.642 { 00:07:51.642 "name": "BaseBdev1", 00:07:51.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.642 "is_configured": false, 00:07:51.642 "data_offset": 0, 00:07:51.642 "data_size": 0 00:07:51.642 }, 00:07:51.642 { 00:07:51.642 "name": "BaseBdev2", 00:07:51.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.642 "is_configured": false, 00:07:51.642 "data_offset": 0, 00:07:51.642 "data_size": 0 00:07:51.642 } 00:07:51.642 ] 00:07:51.642 }' 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.642 11:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.900 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.900 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.900 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.900 [2024-11-19 11:59:55.209694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.900 [2024-11-19 11:59:55.209804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:51.900 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.900 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.900 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:51.900 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.900 [2024-11-19 11:59:55.217683] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.900 [2024-11-19 11:59:55.217792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.900 [2024-11-19 11:59:55.217848] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.900 [2024-11-19 11:59:55.217902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.900 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.900 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:51.900 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.900 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.158 [2024-11-19 11:59:55.278938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.158 BaseBdev1 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.158 [ 00:07:52.158 { 00:07:52.158 "name": "BaseBdev1", 00:07:52.158 "aliases": [ 00:07:52.158 "6f8c01b4-908b-4905-ad73-f65e9984ac5f" 00:07:52.158 ], 00:07:52.158 "product_name": "Malloc disk", 00:07:52.158 "block_size": 512, 00:07:52.158 "num_blocks": 65536, 00:07:52.158 "uuid": "6f8c01b4-908b-4905-ad73-f65e9984ac5f", 00:07:52.158 "assigned_rate_limits": { 00:07:52.158 "rw_ios_per_sec": 0, 00:07:52.158 "rw_mbytes_per_sec": 0, 00:07:52.158 "r_mbytes_per_sec": 0, 00:07:52.158 "w_mbytes_per_sec": 0 00:07:52.158 }, 00:07:52.158 "claimed": true, 00:07:52.158 "claim_type": "exclusive_write", 00:07:52.158 "zoned": false, 00:07:52.158 "supported_io_types": { 00:07:52.158 "read": true, 00:07:52.158 "write": true, 00:07:52.158 "unmap": true, 00:07:52.158 "flush": true, 00:07:52.158 "reset": true, 00:07:52.158 "nvme_admin": false, 00:07:52.158 "nvme_io": false, 00:07:52.158 "nvme_io_md": false, 00:07:52.158 "write_zeroes": true, 00:07:52.158 "zcopy": true, 00:07:52.158 "get_zone_info": false, 00:07:52.158 "zone_management": false, 00:07:52.158 "zone_append": false, 00:07:52.158 "compare": false, 00:07:52.158 
"compare_and_write": false, 00:07:52.158 "abort": true, 00:07:52.158 "seek_hole": false, 00:07:52.158 "seek_data": false, 00:07:52.158 "copy": true, 00:07:52.158 "nvme_iov_md": false 00:07:52.158 }, 00:07:52.158 "memory_domains": [ 00:07:52.158 { 00:07:52.158 "dma_device_id": "system", 00:07:52.158 "dma_device_type": 1 00:07:52.158 }, 00:07:52.158 { 00:07:52.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.158 "dma_device_type": 2 00:07:52.158 } 00:07:52.158 ], 00:07:52.158 "driver_specific": {} 00:07:52.158 } 00:07:52.158 ] 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.158 11:59:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.158 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.158 "name": "Existed_Raid", 00:07:52.158 "uuid": "f10f3a31-8d70-49f6-86ed-60cfd627dbcd", 00:07:52.158 "strip_size_kb": 64, 00:07:52.158 "state": "configuring", 00:07:52.158 "raid_level": "concat", 00:07:52.158 "superblock": true, 00:07:52.158 "num_base_bdevs": 2, 00:07:52.158 "num_base_bdevs_discovered": 1, 00:07:52.158 "num_base_bdevs_operational": 2, 00:07:52.158 "base_bdevs_list": [ 00:07:52.158 { 00:07:52.158 "name": "BaseBdev1", 00:07:52.158 "uuid": "6f8c01b4-908b-4905-ad73-f65e9984ac5f", 00:07:52.158 "is_configured": true, 00:07:52.158 "data_offset": 2048, 00:07:52.158 "data_size": 63488 00:07:52.158 }, 00:07:52.158 { 00:07:52.158 "name": "BaseBdev2", 00:07:52.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.158 "is_configured": false, 00:07:52.158 "data_offset": 0, 00:07:52.158 "data_size": 0 00:07:52.158 } 00:07:52.158 ] 00:07:52.159 }' 00:07:52.159 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.159 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.417 [2024-11-19 11:59:55.706357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:52.417 [2024-11-19 11:59:55.706521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.417 [2024-11-19 11:59:55.714485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.417 [2024-11-19 11:59:55.717766] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.417 [2024-11-19 11:59:55.717892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.417 "name": "Existed_Raid", 00:07:52.417 "uuid": "27bd7f39-e55c-4b30-a8e0-0d15c4f93cef", 00:07:52.417 "strip_size_kb": 64, 00:07:52.417 "state": "configuring", 00:07:52.417 "raid_level": "concat", 00:07:52.417 "superblock": true, 00:07:52.417 "num_base_bdevs": 2, 00:07:52.417 "num_base_bdevs_discovered": 1, 00:07:52.417 "num_base_bdevs_operational": 2, 00:07:52.417 "base_bdevs_list": [ 00:07:52.417 { 00:07:52.417 "name": "BaseBdev1", 00:07:52.417 "uuid": 
"6f8c01b4-908b-4905-ad73-f65e9984ac5f", 00:07:52.417 "is_configured": true, 00:07:52.417 "data_offset": 2048, 00:07:52.417 "data_size": 63488 00:07:52.417 }, 00:07:52.417 { 00:07:52.417 "name": "BaseBdev2", 00:07:52.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.417 "is_configured": false, 00:07:52.417 "data_offset": 0, 00:07:52.417 "data_size": 0 00:07:52.417 } 00:07:52.417 ] 00:07:52.417 }' 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.417 11:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.983 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:52.983 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.983 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.983 [2024-11-19 11:59:56.149255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.983 [2024-11-19 11:59:56.149670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.983 [2024-11-19 11:59:56.149727] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:52.984 [2024-11-19 11:59:56.150169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:52.984 [2024-11-19 11:59:56.150380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.984 [2024-11-19 11:59:56.150429] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:52.984 BaseBdev2 00:07:52.984 [2024-11-19 11:59:56.150643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.984 [ 00:07:52.984 { 00:07:52.984 "name": "BaseBdev2", 00:07:52.984 "aliases": [ 00:07:52.984 "be7f3f61-53ec-4f89-ba78-f5456efd9edf" 00:07:52.984 ], 00:07:52.984 "product_name": "Malloc disk", 00:07:52.984 "block_size": 512, 00:07:52.984 "num_blocks": 65536, 00:07:52.984 "uuid": "be7f3f61-53ec-4f89-ba78-f5456efd9edf", 00:07:52.984 "assigned_rate_limits": { 00:07:52.984 "rw_ios_per_sec": 0, 00:07:52.984 "rw_mbytes_per_sec": 0, 00:07:52.984 "r_mbytes_per_sec": 0, 
00:07:52.984 "w_mbytes_per_sec": 0 00:07:52.984 }, 00:07:52.984 "claimed": true, 00:07:52.984 "claim_type": "exclusive_write", 00:07:52.984 "zoned": false, 00:07:52.984 "supported_io_types": { 00:07:52.984 "read": true, 00:07:52.984 "write": true, 00:07:52.984 "unmap": true, 00:07:52.984 "flush": true, 00:07:52.984 "reset": true, 00:07:52.984 "nvme_admin": false, 00:07:52.984 "nvme_io": false, 00:07:52.984 "nvme_io_md": false, 00:07:52.984 "write_zeroes": true, 00:07:52.984 "zcopy": true, 00:07:52.984 "get_zone_info": false, 00:07:52.984 "zone_management": false, 00:07:52.984 "zone_append": false, 00:07:52.984 "compare": false, 00:07:52.984 "compare_and_write": false, 00:07:52.984 "abort": true, 00:07:52.984 "seek_hole": false, 00:07:52.984 "seek_data": false, 00:07:52.984 "copy": true, 00:07:52.984 "nvme_iov_md": false 00:07:52.984 }, 00:07:52.984 "memory_domains": [ 00:07:52.984 { 00:07:52.984 "dma_device_id": "system", 00:07:52.984 "dma_device_type": 1 00:07:52.984 }, 00:07:52.984 { 00:07:52.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.984 "dma_device_type": 2 00:07:52.984 } 00:07:52.984 ], 00:07:52.984 "driver_specific": {} 00:07:52.984 } 00:07:52.984 ] 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.984 "name": "Existed_Raid", 00:07:52.984 "uuid": "27bd7f39-e55c-4b30-a8e0-0d15c4f93cef", 00:07:52.984 "strip_size_kb": 64, 00:07:52.984 "state": "online", 00:07:52.984 "raid_level": "concat", 00:07:52.984 "superblock": true, 00:07:52.984 "num_base_bdevs": 2, 00:07:52.984 "num_base_bdevs_discovered": 2, 00:07:52.984 "num_base_bdevs_operational": 2, 00:07:52.984 "base_bdevs_list": [ 00:07:52.984 { 00:07:52.984 "name": "BaseBdev1", 00:07:52.984 "uuid": 
"6f8c01b4-908b-4905-ad73-f65e9984ac5f", 00:07:52.984 "is_configured": true, 00:07:52.984 "data_offset": 2048, 00:07:52.984 "data_size": 63488 00:07:52.984 }, 00:07:52.984 { 00:07:52.984 "name": "BaseBdev2", 00:07:52.984 "uuid": "be7f3f61-53ec-4f89-ba78-f5456efd9edf", 00:07:52.984 "is_configured": true, 00:07:52.984 "data_offset": 2048, 00:07:52.984 "data_size": 63488 00:07:52.984 } 00:07:52.984 ] 00:07:52.984 }' 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.984 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.552 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:53.552 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:53.552 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:53.552 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:53.552 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:53.552 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:53.552 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:53.552 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:53.552 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.552 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.552 [2024-11-19 11:59:56.672727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.552 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:53.552 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:53.552 "name": "Existed_Raid", 00:07:53.552 "aliases": [ 00:07:53.552 "27bd7f39-e55c-4b30-a8e0-0d15c4f93cef" 00:07:53.552 ], 00:07:53.552 "product_name": "Raid Volume", 00:07:53.552 "block_size": 512, 00:07:53.552 "num_blocks": 126976, 00:07:53.552 "uuid": "27bd7f39-e55c-4b30-a8e0-0d15c4f93cef", 00:07:53.552 "assigned_rate_limits": { 00:07:53.552 "rw_ios_per_sec": 0, 00:07:53.552 "rw_mbytes_per_sec": 0, 00:07:53.552 "r_mbytes_per_sec": 0, 00:07:53.552 "w_mbytes_per_sec": 0 00:07:53.552 }, 00:07:53.552 "claimed": false, 00:07:53.552 "zoned": false, 00:07:53.552 "supported_io_types": { 00:07:53.552 "read": true, 00:07:53.552 "write": true, 00:07:53.552 "unmap": true, 00:07:53.552 "flush": true, 00:07:53.552 "reset": true, 00:07:53.552 "nvme_admin": false, 00:07:53.552 "nvme_io": false, 00:07:53.552 "nvme_io_md": false, 00:07:53.552 "write_zeroes": true, 00:07:53.552 "zcopy": false, 00:07:53.552 "get_zone_info": false, 00:07:53.552 "zone_management": false, 00:07:53.552 "zone_append": false, 00:07:53.552 "compare": false, 00:07:53.552 "compare_and_write": false, 00:07:53.552 "abort": false, 00:07:53.552 "seek_hole": false, 00:07:53.552 "seek_data": false, 00:07:53.552 "copy": false, 00:07:53.552 "nvme_iov_md": false 00:07:53.552 }, 00:07:53.552 "memory_domains": [ 00:07:53.552 { 00:07:53.552 "dma_device_id": "system", 00:07:53.553 "dma_device_type": 1 00:07:53.553 }, 00:07:53.553 { 00:07:53.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.553 "dma_device_type": 2 00:07:53.553 }, 00:07:53.553 { 00:07:53.553 "dma_device_id": "system", 00:07:53.553 "dma_device_type": 1 00:07:53.553 }, 00:07:53.553 { 00:07:53.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.553 "dma_device_type": 2 00:07:53.553 } 00:07:53.553 ], 00:07:53.553 "driver_specific": { 00:07:53.553 "raid": { 00:07:53.553 "uuid": "27bd7f39-e55c-4b30-a8e0-0d15c4f93cef", 00:07:53.553 
"strip_size_kb": 64, 00:07:53.553 "state": "online", 00:07:53.553 "raid_level": "concat", 00:07:53.553 "superblock": true, 00:07:53.553 "num_base_bdevs": 2, 00:07:53.553 "num_base_bdevs_discovered": 2, 00:07:53.553 "num_base_bdevs_operational": 2, 00:07:53.553 "base_bdevs_list": [ 00:07:53.553 { 00:07:53.553 "name": "BaseBdev1", 00:07:53.553 "uuid": "6f8c01b4-908b-4905-ad73-f65e9984ac5f", 00:07:53.553 "is_configured": true, 00:07:53.553 "data_offset": 2048, 00:07:53.553 "data_size": 63488 00:07:53.553 }, 00:07:53.553 { 00:07:53.553 "name": "BaseBdev2", 00:07:53.553 "uuid": "be7f3f61-53ec-4f89-ba78-f5456efd9edf", 00:07:53.553 "is_configured": true, 00:07:53.553 "data_offset": 2048, 00:07:53.553 "data_size": 63488 00:07:53.553 } 00:07:53.553 ] 00:07:53.553 } 00:07:53.553 } 00:07:53.553 }' 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:53.553 BaseBdev2' 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.553 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.553 [2024-11-19 11:59:56.888151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:53.553 [2024-11-19 11:59:56.888191] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.553 [2024-11-19 11:59:56.888256] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.812 11:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.812 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:07:53.812 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.812 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.812 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.812 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.812 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.812 "name": "Existed_Raid", 00:07:53.812 "uuid": "27bd7f39-e55c-4b30-a8e0-0d15c4f93cef", 00:07:53.812 "strip_size_kb": 64, 00:07:53.812 "state": "offline", 00:07:53.812 "raid_level": "concat", 00:07:53.812 "superblock": true, 00:07:53.812 "num_base_bdevs": 2, 00:07:53.812 "num_base_bdevs_discovered": 1, 00:07:53.812 "num_base_bdevs_operational": 1, 00:07:53.812 "base_bdevs_list": [ 00:07:53.812 { 00:07:53.812 "name": null, 00:07:53.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.812 "is_configured": false, 00:07:53.812 "data_offset": 0, 00:07:53.812 "data_size": 63488 00:07:53.812 }, 00:07:53.812 { 00:07:53.812 "name": "BaseBdev2", 00:07:53.812 "uuid": "be7f3f61-53ec-4f89-ba78-f5456efd9edf", 00:07:53.812 "is_configured": true, 00:07:53.812 "data_offset": 2048, 00:07:53.812 "data_size": 63488 00:07:53.812 } 00:07:53.812 ] 00:07:53.812 }' 00:07:53.812 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.812 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.072 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:54.072 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:54.072 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:54.072 
11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.072 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.072 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.072 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.072 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:54.072 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:54.072 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:54.072 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.072 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.072 [2024-11-19 11:59:57.421331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:54.072 [2024-11-19 11:59:57.421445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.331 11:59:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62005 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62005 ']' 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62005 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62005 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62005' 00:07:54.331 killing process with pid 62005 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62005 00:07:54.331 [2024-11-19 11:59:57.623416] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.331 11:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62005 00:07:54.331 [2024-11-19 11:59:57.642178] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.710 11:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:55.710 00:07:55.710 real 0m5.182s 00:07:55.710 user 0m7.346s 00:07:55.710 sys 0m0.886s 00:07:55.710 11:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.710 11:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.710 ************************************ 00:07:55.710 END TEST raid_state_function_test_sb 00:07:55.710 ************************************ 00:07:55.710 11:59:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:55.710 11:59:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:55.710 11:59:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.710 11:59:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.710 ************************************ 00:07:55.710 START TEST raid_superblock_test 00:07:55.710 ************************************ 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:55.710 
11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:55.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62263 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62263 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62263 ']' 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.710 11:59:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.710 [2024-11-19 11:59:59.027119] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:07:55.710 [2024-11-19 11:59:59.027363] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62263 ] 00:07:55.968 [2024-11-19 11:59:59.206947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.227 [2024-11-19 11:59:59.348569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.227 [2024-11-19 11:59:59.587410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.227 [2024-11-19 11:59:59.587606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.486 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.486 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:56.486 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:56.486 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.486 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:56.486 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:56.486 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:56.486 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:56.486 11:59:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:56.486 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:56.486 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:56.486 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.486 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.746 malloc1 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.746 [2024-11-19 11:59:59.905862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:56.746 [2024-11-19 11:59:59.905942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.746 [2024-11-19 11:59:59.905973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:56.746 [2024-11-19 11:59:59.905983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.746 [2024-11-19 11:59:59.908881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.746 [2024-11-19 11:59:59.909013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:56.746 pt1 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:56.746 11:59:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.746 malloc2 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.746 [2024-11-19 11:59:59.969350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:56.746 [2024-11-19 11:59:59.969459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.746 [2024-11-19 11:59:59.969504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:56.746 
[2024-11-19 11:59:59.969536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.746 [2024-11-19 11:59:59.972057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.746 [2024-11-19 11:59:59.972132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:56.746 pt2 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.746 [2024-11-19 11:59:59.981392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:56.746 [2024-11-19 11:59:59.983569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:56.746 [2024-11-19 11:59:59.983798] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:56.746 [2024-11-19 11:59:59.983846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:56.746 [2024-11-19 11:59:59.984130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:56.746 [2024-11-19 11:59:59.984324] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:56.746 [2024-11-19 11:59:59.984370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:56.746 [2024-11-19 11:59:59.984550] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.746 11:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.746 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.746 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.746 "name": "raid_bdev1", 00:07:56.746 "uuid": 
"d61d885e-c187-4f2f-99f1-adcc198fa7c7", 00:07:56.746 "strip_size_kb": 64, 00:07:56.746 "state": "online", 00:07:56.746 "raid_level": "concat", 00:07:56.746 "superblock": true, 00:07:56.746 "num_base_bdevs": 2, 00:07:56.746 "num_base_bdevs_discovered": 2, 00:07:56.746 "num_base_bdevs_operational": 2, 00:07:56.746 "base_bdevs_list": [ 00:07:56.746 { 00:07:56.746 "name": "pt1", 00:07:56.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.746 "is_configured": true, 00:07:56.746 "data_offset": 2048, 00:07:56.746 "data_size": 63488 00:07:56.746 }, 00:07:56.746 { 00:07:56.746 "name": "pt2", 00:07:56.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.746 "is_configured": true, 00:07:56.746 "data_offset": 2048, 00:07:56.746 "data_size": 63488 00:07:56.746 } 00:07:56.746 ] 00:07:56.746 }' 00:07:56.746 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.746 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.006 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:57.006 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:57.006 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:57.006 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:57.006 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:57.006 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:57.006 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:57.006 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.006 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.006 
12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.006 [2024-11-19 12:00:00.369155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.265 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.265 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:57.265 "name": "raid_bdev1", 00:07:57.265 "aliases": [ 00:07:57.265 "d61d885e-c187-4f2f-99f1-adcc198fa7c7" 00:07:57.265 ], 00:07:57.265 "product_name": "Raid Volume", 00:07:57.265 "block_size": 512, 00:07:57.265 "num_blocks": 126976, 00:07:57.265 "uuid": "d61d885e-c187-4f2f-99f1-adcc198fa7c7", 00:07:57.265 "assigned_rate_limits": { 00:07:57.265 "rw_ios_per_sec": 0, 00:07:57.265 "rw_mbytes_per_sec": 0, 00:07:57.265 "r_mbytes_per_sec": 0, 00:07:57.265 "w_mbytes_per_sec": 0 00:07:57.265 }, 00:07:57.265 "claimed": false, 00:07:57.265 "zoned": false, 00:07:57.265 "supported_io_types": { 00:07:57.265 "read": true, 00:07:57.265 "write": true, 00:07:57.265 "unmap": true, 00:07:57.265 "flush": true, 00:07:57.265 "reset": true, 00:07:57.265 "nvme_admin": false, 00:07:57.265 "nvme_io": false, 00:07:57.265 "nvme_io_md": false, 00:07:57.265 "write_zeroes": true, 00:07:57.266 "zcopy": false, 00:07:57.266 "get_zone_info": false, 00:07:57.266 "zone_management": false, 00:07:57.266 "zone_append": false, 00:07:57.266 "compare": false, 00:07:57.266 "compare_and_write": false, 00:07:57.266 "abort": false, 00:07:57.266 "seek_hole": false, 00:07:57.266 "seek_data": false, 00:07:57.266 "copy": false, 00:07:57.266 "nvme_iov_md": false 00:07:57.266 }, 00:07:57.266 "memory_domains": [ 00:07:57.266 { 00:07:57.266 "dma_device_id": "system", 00:07:57.266 "dma_device_type": 1 00:07:57.266 }, 00:07:57.266 { 00:07:57.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.266 "dma_device_type": 2 00:07:57.266 }, 00:07:57.266 { 00:07:57.266 "dma_device_id": "system", 00:07:57.266 
"dma_device_type": 1 00:07:57.266 }, 00:07:57.266 { 00:07:57.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.266 "dma_device_type": 2 00:07:57.266 } 00:07:57.266 ], 00:07:57.266 "driver_specific": { 00:07:57.266 "raid": { 00:07:57.266 "uuid": "d61d885e-c187-4f2f-99f1-adcc198fa7c7", 00:07:57.266 "strip_size_kb": 64, 00:07:57.266 "state": "online", 00:07:57.266 "raid_level": "concat", 00:07:57.266 "superblock": true, 00:07:57.266 "num_base_bdevs": 2, 00:07:57.266 "num_base_bdevs_discovered": 2, 00:07:57.266 "num_base_bdevs_operational": 2, 00:07:57.266 "base_bdevs_list": [ 00:07:57.266 { 00:07:57.266 "name": "pt1", 00:07:57.266 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.266 "is_configured": true, 00:07:57.266 "data_offset": 2048, 00:07:57.266 "data_size": 63488 00:07:57.266 }, 00:07:57.266 { 00:07:57.266 "name": "pt2", 00:07:57.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.266 "is_configured": true, 00:07:57.266 "data_offset": 2048, 00:07:57.266 "data_size": 63488 00:07:57.266 } 00:07:57.266 ] 00:07:57.266 } 00:07:57.266 } 00:07:57.266 }' 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:57.266 pt2' 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.266 12:00:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.266 [2024-11-19 12:00:00.596610] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d61d885e-c187-4f2f-99f1-adcc198fa7c7 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d61d885e-c187-4f2f-99f1-adcc198fa7c7 ']' 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.266 [2024-11-19 12:00:00.624227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.266 [2024-11-19 12:00:00.624308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.266 [2024-11-19 12:00:00.624446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.266 [2024-11-19 12:00:00.624533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.266 [2024-11-19 12:00:00.624602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.266 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.526 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.526 [2024-11-19 12:00:00.752153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:57.526 [2024-11-19 12:00:00.754439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:57.527 [2024-11-19 12:00:00.754518] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:57.527 [2024-11-19 12:00:00.754586] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:57.527 [2024-11-19 12:00:00.754603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.527 [2024-11-19 12:00:00.754615] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:57.527 request: 00:07:57.527 { 00:07:57.527 "name": "raid_bdev1", 00:07:57.527 "raid_level": "concat", 00:07:57.527 "base_bdevs": [ 00:07:57.527 "malloc1", 00:07:57.527 "malloc2" 00:07:57.527 ], 00:07:57.527 "strip_size_kb": 64, 00:07:57.527 "superblock": false, 00:07:57.527 "method": "bdev_raid_create", 00:07:57.527 "req_id": 1 00:07:57.527 } 00:07:57.527 Got JSON-RPC error response 00:07:57.527 response: 00:07:57.527 { 00:07:57.527 "code": -17, 00:07:57.527 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:57.527 } 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.527 [2024-11-19 12:00:00.819950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:57.527 [2024-11-19 12:00:00.820086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.527 [2024-11-19 12:00:00.820132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:57.527 [2024-11-19 12:00:00.820191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.527 [2024-11-19 12:00:00.822849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.527 [2024-11-19 12:00:00.822930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:57.527 [2024-11-19 12:00:00.823079] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:57.527 [2024-11-19 12:00:00.823196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:57.527 pt1 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.527 "name": "raid_bdev1", 00:07:57.527 "uuid": "d61d885e-c187-4f2f-99f1-adcc198fa7c7", 00:07:57.527 "strip_size_kb": 64, 00:07:57.527 "state": "configuring", 00:07:57.527 "raid_level": "concat", 00:07:57.527 "superblock": true, 00:07:57.527 "num_base_bdevs": 2, 00:07:57.527 "num_base_bdevs_discovered": 1, 00:07:57.527 "num_base_bdevs_operational": 2, 00:07:57.527 "base_bdevs_list": [ 00:07:57.527 { 00:07:57.527 "name": "pt1", 00:07:57.527 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.527 "is_configured": true, 00:07:57.527 "data_offset": 2048, 00:07:57.527 "data_size": 63488 00:07:57.527 }, 00:07:57.527 { 00:07:57.527 "name": null, 00:07:57.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.527 "is_configured": false, 00:07:57.527 "data_offset": 2048, 00:07:57.527 "data_size": 63488 00:07:57.527 } 00:07:57.527 ] 00:07:57.527 }' 00:07:57.527 12:00:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.527 12:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.094 [2024-11-19 12:00:01.243264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:58.094 [2024-11-19 12:00:01.243440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.094 [2024-11-19 12:00:01.243490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:58.094 [2024-11-19 12:00:01.243548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.094 [2024-11-19 12:00:01.244164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.094 [2024-11-19 12:00:01.244231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:58.094 [2024-11-19 12:00:01.244362] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:58.094 [2024-11-19 12:00:01.244424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.094 [2024-11-19 12:00:01.244603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.094 [2024-11-19 12:00:01.244645] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:58.094 [2024-11-19 12:00:01.244931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:58.094 [2024-11-19 12:00:01.245152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.094 [2024-11-19 12:00:01.245196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:58.094 [2024-11-19 12:00:01.245390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.094 pt2 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.094 "name": "raid_bdev1", 00:07:58.094 "uuid": "d61d885e-c187-4f2f-99f1-adcc198fa7c7", 00:07:58.094 "strip_size_kb": 64, 00:07:58.094 "state": "online", 00:07:58.094 "raid_level": "concat", 00:07:58.094 "superblock": true, 00:07:58.094 "num_base_bdevs": 2, 00:07:58.094 "num_base_bdevs_discovered": 2, 00:07:58.094 "num_base_bdevs_operational": 2, 00:07:58.094 "base_bdevs_list": [ 00:07:58.094 { 00:07:58.094 "name": "pt1", 00:07:58.094 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.094 "is_configured": true, 00:07:58.094 "data_offset": 2048, 00:07:58.094 "data_size": 63488 00:07:58.094 }, 00:07:58.094 { 00:07:58.094 "name": "pt2", 00:07:58.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.094 "is_configured": true, 00:07:58.094 "data_offset": 2048, 00:07:58.094 "data_size": 63488 00:07:58.094 } 00:07:58.094 ] 00:07:58.094 }' 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.094 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.353 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:58.353 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:58.353 
12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.353 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.353 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.353 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.353 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.353 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.353 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.353 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.353 [2024-11-19 12:00:01.702854] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.353 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.612 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.612 "name": "raid_bdev1", 00:07:58.612 "aliases": [ 00:07:58.612 "d61d885e-c187-4f2f-99f1-adcc198fa7c7" 00:07:58.612 ], 00:07:58.612 "product_name": "Raid Volume", 00:07:58.612 "block_size": 512, 00:07:58.612 "num_blocks": 126976, 00:07:58.612 "uuid": "d61d885e-c187-4f2f-99f1-adcc198fa7c7", 00:07:58.612 "assigned_rate_limits": { 00:07:58.612 "rw_ios_per_sec": 0, 00:07:58.612 "rw_mbytes_per_sec": 0, 00:07:58.612 "r_mbytes_per_sec": 0, 00:07:58.612 "w_mbytes_per_sec": 0 00:07:58.612 }, 00:07:58.612 "claimed": false, 00:07:58.612 "zoned": false, 00:07:58.612 "supported_io_types": { 00:07:58.612 "read": true, 00:07:58.612 "write": true, 00:07:58.612 "unmap": true, 00:07:58.612 "flush": true, 00:07:58.612 "reset": true, 00:07:58.612 "nvme_admin": false, 00:07:58.612 "nvme_io": false, 00:07:58.612 "nvme_io_md": false, 00:07:58.612 
"write_zeroes": true, 00:07:58.612 "zcopy": false, 00:07:58.612 "get_zone_info": false, 00:07:58.612 "zone_management": false, 00:07:58.612 "zone_append": false, 00:07:58.612 "compare": false, 00:07:58.612 "compare_and_write": false, 00:07:58.612 "abort": false, 00:07:58.612 "seek_hole": false, 00:07:58.612 "seek_data": false, 00:07:58.612 "copy": false, 00:07:58.612 "nvme_iov_md": false 00:07:58.612 }, 00:07:58.612 "memory_domains": [ 00:07:58.612 { 00:07:58.612 "dma_device_id": "system", 00:07:58.612 "dma_device_type": 1 00:07:58.612 }, 00:07:58.612 { 00:07:58.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.612 "dma_device_type": 2 00:07:58.612 }, 00:07:58.612 { 00:07:58.612 "dma_device_id": "system", 00:07:58.612 "dma_device_type": 1 00:07:58.612 }, 00:07:58.612 { 00:07:58.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.612 "dma_device_type": 2 00:07:58.612 } 00:07:58.612 ], 00:07:58.612 "driver_specific": { 00:07:58.612 "raid": { 00:07:58.612 "uuid": "d61d885e-c187-4f2f-99f1-adcc198fa7c7", 00:07:58.612 "strip_size_kb": 64, 00:07:58.612 "state": "online", 00:07:58.612 "raid_level": "concat", 00:07:58.612 "superblock": true, 00:07:58.612 "num_base_bdevs": 2, 00:07:58.612 "num_base_bdevs_discovered": 2, 00:07:58.612 "num_base_bdevs_operational": 2, 00:07:58.612 "base_bdevs_list": [ 00:07:58.612 { 00:07:58.612 "name": "pt1", 00:07:58.612 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.612 "is_configured": true, 00:07:58.612 "data_offset": 2048, 00:07:58.612 "data_size": 63488 00:07:58.612 }, 00:07:58.612 { 00:07:58.612 "name": "pt2", 00:07:58.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.612 "is_configured": true, 00:07:58.612 "data_offset": 2048, 00:07:58.612 "data_size": 63488 00:07:58.612 } 00:07:58.613 ] 00:07:58.613 } 00:07:58.613 } 00:07:58.613 }' 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:58.613 pt2' 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.613 12:00:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.613 [2024-11-19 12:00:01.938461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d61d885e-c187-4f2f-99f1-adcc198fa7c7 '!=' d61d885e-c187-4f2f-99f1-adcc198fa7c7 ']' 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62263 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62263 ']' 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62263 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.613 12:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62263 00:07:58.872 12:00:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.872 12:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.872 12:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62263' 00:07:58.872 killing process with pid 62263 00:07:58.872 12:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62263 00:07:58.872 [2024-11-19 12:00:02.007190] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.872 [2024-11-19 12:00:02.007372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.872 12:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62263 00:07:58.872 [2024-11-19 12:00:02.007467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.872 [2024-11-19 12:00:02.007518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:58.872 [2024-11-19 12:00:02.232342] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.250 12:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:00.250 00:08:00.250 real 0m4.518s 00:08:00.250 user 0m6.143s 00:08:00.250 sys 0m0.837s 00:08:00.250 12:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.250 ************************************ 00:08:00.250 END TEST raid_superblock_test 00:08:00.250 ************************************ 00:08:00.250 12:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.250 12:00:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:00.250 12:00:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:00.250 12:00:03 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.250 12:00:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.250 ************************************ 00:08:00.250 START TEST raid_read_error_test 00:08:00.250 ************************************ 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:00.250 12:00:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ByJWjeD36f 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62469 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62469 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62469 ']' 00:08:00.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.250 12:00:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.250 [2024-11-19 12:00:03.612256] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:00.250 [2024-11-19 12:00:03.612401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62469 ] 00:08:00.509 [2024-11-19 12:00:03.792049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.768 [2024-11-19 12:00:03.955114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.026 [2024-11-19 12:00:04.253531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.026 [2024-11-19 12:00:04.253585] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.285 BaseBdev1_malloc 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.285 true 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.285 [2024-11-19 12:00:04.562990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:01.285 [2024-11-19 12:00:04.563158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.285 [2024-11-19 12:00:04.563223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:01.285 [2024-11-19 12:00:04.563271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.285 [2024-11-19 12:00:04.566278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.285 [2024-11-19 12:00:04.566368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:01.285 BaseBdev1 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:01.285 BaseBdev2_malloc 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.285 true 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.285 [2024-11-19 12:00:04.645883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:01.285 [2024-11-19 12:00:04.645955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.285 [2024-11-19 12:00:04.645976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:01.285 [2024-11-19 12:00:04.645990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.285 [2024-11-19 12:00:04.648784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.285 [2024-11-19 12:00:04.648883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:01.285 BaseBdev2 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:01.285 
12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.285 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.285 [2024-11-19 12:00:04.657948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.544 [2024-11-19 12:00:04.660416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.544 [2024-11-19 12:00:04.660659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:01.544 [2024-11-19 12:00:04.660678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:01.544 [2024-11-19 12:00:04.660974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:01.544 [2024-11-19 12:00:04.661225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:01.544 [2024-11-19 12:00:04.661242] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:01.544 [2024-11-19 12:00:04.661445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.544 "name": "raid_bdev1", 00:08:01.544 "uuid": "8650e4cb-e65d-4b50-9b01-1f3225a444f4", 00:08:01.544 "strip_size_kb": 64, 00:08:01.544 "state": "online", 00:08:01.544 "raid_level": "concat", 00:08:01.544 "superblock": true, 00:08:01.544 "num_base_bdevs": 2, 00:08:01.544 "num_base_bdevs_discovered": 2, 00:08:01.544 "num_base_bdevs_operational": 2, 00:08:01.544 "base_bdevs_list": [ 00:08:01.544 { 00:08:01.544 "name": "BaseBdev1", 00:08:01.544 "uuid": "06cfb0ff-2d0e-56d0-ac4f-098ab1034ed9", 00:08:01.544 "is_configured": true, 00:08:01.544 "data_offset": 2048, 00:08:01.544 "data_size": 63488 00:08:01.544 }, 00:08:01.544 { 00:08:01.544 "name": "BaseBdev2", 00:08:01.544 "uuid": "11c04229-00f0-56da-8ab5-977d64f8a4aa", 00:08:01.544 "is_configured": true, 00:08:01.544 "data_offset": 2048, 00:08:01.544 "data_size": 63488 00:08:01.544 } 00:08:01.544 ] 00:08:01.544 }' 00:08:01.544 12:00:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.544 12:00:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.801 12:00:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:01.801 12:00:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:02.060 [2024-11-19 12:00:05.258733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.997 "name": "raid_bdev1", 00:08:02.997 "uuid": "8650e4cb-e65d-4b50-9b01-1f3225a444f4", 00:08:02.997 "strip_size_kb": 64, 00:08:02.997 "state": "online", 00:08:02.997 "raid_level": "concat", 00:08:02.997 "superblock": true, 00:08:02.997 "num_base_bdevs": 2, 00:08:02.997 "num_base_bdevs_discovered": 2, 00:08:02.997 "num_base_bdevs_operational": 2, 00:08:02.997 "base_bdevs_list": [ 00:08:02.997 { 00:08:02.997 "name": "BaseBdev1", 00:08:02.997 "uuid": "06cfb0ff-2d0e-56d0-ac4f-098ab1034ed9", 00:08:02.997 "is_configured": true, 00:08:02.997 "data_offset": 2048, 00:08:02.997 "data_size": 63488 00:08:02.997 }, 00:08:02.997 { 00:08:02.997 "name": "BaseBdev2", 00:08:02.997 "uuid": "11c04229-00f0-56da-8ab5-977d64f8a4aa", 00:08:02.997 "is_configured": true, 00:08:02.997 "data_offset": 2048, 00:08:02.997 "data_size": 63488 00:08:02.997 } 00:08:02.997 ] 00:08:02.997 }' 00:08:02.997 12:00:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.997 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.257 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:03.257 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.257 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.257 [2024-11-19 12:00:06.572332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:03.257 [2024-11-19 12:00:06.572380] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.257 [2024-11-19 12:00:06.575219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.257 [2024-11-19 12:00:06.575312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.257 [2024-11-19 12:00:06.575377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.257 [2024-11-19 12:00:06.575435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:03.257 { 00:08:03.257 "results": [ 00:08:03.257 { 00:08:03.257 "job": "raid_bdev1", 00:08:03.257 "core_mask": "0x1", 00:08:03.257 "workload": "randrw", 00:08:03.257 "percentage": 50, 00:08:03.257 "status": "finished", 00:08:03.257 "queue_depth": 1, 00:08:03.257 "io_size": 131072, 00:08:03.257 "runtime": 1.313567, 00:08:03.257 "iops": 13176.33588541734, 00:08:03.257 "mibps": 1647.0419856771675, 00:08:03.257 "io_failed": 1, 00:08:03.257 "io_timeout": 0, 00:08:03.257 "avg_latency_us": 106.48790903386961, 00:08:03.257 "min_latency_us": 26.047161572052403, 00:08:03.257 "max_latency_us": 1438.071615720524 00:08:03.257 } 00:08:03.257 ], 00:08:03.257 "core_count": 1 00:08:03.257 } 00:08:03.257 12:00:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.257 12:00:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62469 00:08:03.257 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62469 ']' 00:08:03.257 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62469 00:08:03.257 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:03.257 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.257 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62469 00:08:03.257 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.257 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.257 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62469' 00:08:03.257 killing process with pid 62469 00:08:03.257 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62469 00:08:03.257 [2024-11-19 12:00:06.624444] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.257 12:00:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62469 00:08:03.516 [2024-11-19 12:00:06.776608] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.899 12:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:04.899 12:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:04.899 12:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ByJWjeD36f 00:08:04.899 12:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:08:04.900 
************************************ 00:08:04.900 END TEST raid_read_error_test 00:08:04.900 ************************************ 00:08:04.900 12:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:04.900 12:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.900 12:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:04.900 12:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:08:04.900 00:08:04.900 real 0m4.569s 00:08:04.900 user 0m5.405s 00:08:04.900 sys 0m0.660s 00:08:04.900 12:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.900 12:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.900 12:00:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:04.900 12:00:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:04.900 12:00:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.900 12:00:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.900 ************************************ 00:08:04.900 START TEST raid_write_error_test 00:08:04.900 ************************************ 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5MbVTFrqex 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # 
raid_pid=62615 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62615 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62615 ']' 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.900 12:00:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.900 [2024-11-19 12:00:08.256724] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:08:04.900 [2024-11-19 12:00:08.256932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62615 ] 00:08:05.159 [2024-11-19 12:00:08.438554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.419 [2024-11-19 12:00:08.580616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.678 [2024-11-19 12:00:08.823153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.678 [2024-11-19 12:00:08.823314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.961 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.961 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:05.961 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:05.961 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:05.961 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.961 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.961 BaseBdev1_malloc 00:08:05.961 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.961 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:05.961 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.961 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.961 true 00:08:05.961 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:05.961 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:05.961 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.961 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.961 [2024-11-19 12:00:09.159174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:05.961 [2024-11-19 12:00:09.159316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.961 [2024-11-19 12:00:09.159346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:05.961 [2024-11-19 12:00:09.159359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.962 [2024-11-19 12:00:09.161853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.962 [2024-11-19 12:00:09.161895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:05.962 BaseBdev1 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.962 BaseBdev2_malloc 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:05.962 12:00:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.962 true 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.962 [2024-11-19 12:00:09.224063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:05.962 [2024-11-19 12:00:09.224125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.962 [2024-11-19 12:00:09.224142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:05.962 [2024-11-19 12:00:09.224154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.962 [2024-11-19 12:00:09.226488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.962 [2024-11-19 12:00:09.226527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:05.962 BaseBdev2 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.962 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.962 [2024-11-19 12:00:09.232115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:05.962 [2024-11-19 12:00:09.234229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:05.963 [2024-11-19 12:00:09.234432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:05.963 [2024-11-19 12:00:09.234449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:05.963 [2024-11-19 12:00:09.234685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:05.963 [2024-11-19 12:00:09.234864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:05.963 [2024-11-19 12:00:09.234877] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:05.963 [2024-11-19 12:00:09.235060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.963 12:00:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.963 "name": "raid_bdev1", 00:08:05.963 "uuid": "c336c71f-ba36-41d0-9aa4-253425ff73bb", 00:08:05.963 "strip_size_kb": 64, 00:08:05.963 "state": "online", 00:08:05.963 "raid_level": "concat", 00:08:05.963 "superblock": true, 00:08:05.963 "num_base_bdevs": 2, 00:08:05.963 "num_base_bdevs_discovered": 2, 00:08:05.963 "num_base_bdevs_operational": 2, 00:08:05.963 "base_bdevs_list": [ 00:08:05.963 { 00:08:05.963 "name": "BaseBdev1", 00:08:05.963 "uuid": "71a34460-0f83-5b08-8716-132082690a65", 00:08:05.963 "is_configured": true, 00:08:05.963 "data_offset": 2048, 00:08:05.963 "data_size": 63488 00:08:05.963 }, 00:08:05.963 { 00:08:05.963 "name": "BaseBdev2", 00:08:05.963 "uuid": "ecb2c9ee-fd77-563f-bcdc-134b2dfbc622", 00:08:05.963 "is_configured": true, 00:08:05.963 "data_offset": 2048, 00:08:05.963 "data_size": 63488 00:08:05.963 } 00:08:05.963 ] 00:08:05.963 }' 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.963 12:00:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.533 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:06.533 12:00:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:06.533 [2024-11-19 12:00:09.788744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.471 "name": "raid_bdev1", 00:08:07.471 "uuid": "c336c71f-ba36-41d0-9aa4-253425ff73bb", 00:08:07.471 "strip_size_kb": 64, 00:08:07.471 "state": "online", 00:08:07.471 "raid_level": "concat", 00:08:07.471 "superblock": true, 00:08:07.471 "num_base_bdevs": 2, 00:08:07.471 "num_base_bdevs_discovered": 2, 00:08:07.471 "num_base_bdevs_operational": 2, 00:08:07.471 "base_bdevs_list": [ 00:08:07.471 { 00:08:07.471 "name": "BaseBdev1", 00:08:07.471 "uuid": "71a34460-0f83-5b08-8716-132082690a65", 00:08:07.471 "is_configured": true, 00:08:07.471 "data_offset": 2048, 00:08:07.471 "data_size": 63488 00:08:07.471 }, 00:08:07.471 { 00:08:07.471 "name": "BaseBdev2", 00:08:07.471 "uuid": "ecb2c9ee-fd77-563f-bcdc-134b2dfbc622", 00:08:07.471 "is_configured": true, 00:08:07.471 "data_offset": 2048, 00:08:07.471 "data_size": 63488 00:08:07.471 } 00:08:07.471 ] 00:08:07.471 }' 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.471 12:00:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.040 [2024-11-19 12:00:11.137599] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.040 [2024-11-19 12:00:11.137747] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.040 [2024-11-19 12:00:11.140528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.040 [2024-11-19 12:00:11.140572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.040 [2024-11-19 12:00:11.140607] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.040 [2024-11-19 12:00:11.140624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.040 { 00:08:08.040 "results": [ 00:08:08.040 { 00:08:08.040 "job": "raid_bdev1", 00:08:08.040 "core_mask": "0x1", 00:08:08.040 "workload": "randrw", 00:08:08.040 "percentage": 50, 00:08:08.040 "status": "finished", 00:08:08.040 "queue_depth": 1, 00:08:08.040 "io_size": 131072, 00:08:08.040 "runtime": 1.349398, 00:08:08.040 "iops": 14147.790347992215, 00:08:08.040 "mibps": 1768.473793499027, 00:08:08.040 "io_failed": 1, 00:08:08.040 "io_timeout": 0, 00:08:08.040 "avg_latency_us": 99.37131883584611, 00:08:08.040 "min_latency_us": 25.7117903930131, 00:08:08.040 "max_latency_us": 1402.2986899563318 00:08:08.040 } 00:08:08.040 ], 00:08:08.040 "core_count": 1 00:08:08.040 } 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62615 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@954 -- # '[' -z 62615 ']' 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62615 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62615 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62615' 00:08:08.040 killing process with pid 62615 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62615 00:08:08.040 [2024-11-19 12:00:11.181023] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.040 12:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62615 00:08:08.040 [2024-11-19 12:00:11.332147] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.421 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:09.421 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:09.421 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5MbVTFrqex 00:08:09.421 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:09.421 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:09.421 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.421 12:00:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:09.421 12:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:09.421 00:08:09.421 real 0m4.463s 00:08:09.421 user 0m5.213s 00:08:09.421 sys 0m0.665s 00:08:09.421 12:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.421 12:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.421 ************************************ 00:08:09.421 END TEST raid_write_error_test 00:08:09.421 ************************************ 00:08:09.421 12:00:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:09.421 12:00:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:09.421 12:00:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:09.421 12:00:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.421 12:00:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:09.421 ************************************ 00:08:09.421 START TEST raid_state_function_test 00:08:09.421 ************************************ 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62758 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62758' 00:08:09.421 Process raid pid: 62758 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62758 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62758 ']' 00:08:09.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.421 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.421 [2024-11-19 12:00:12.771269] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:08:09.421 [2024-11-19 12:00:12.771398] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.681 [2024-11-19 12:00:12.948428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.940 [2024-11-19 12:00:13.088702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.200 [2024-11-19 12:00:13.325241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.200 [2024-11-19 12:00:13.325278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.459 [2024-11-19 12:00:13.605384] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.459 [2024-11-19 12:00:13.605463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.459 [2024-11-19 12:00:13.605474] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.459 [2024-11-19 12:00:13.605485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.459 12:00:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.459 "name": "Existed_Raid", 00:08:10.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.459 "strip_size_kb": 0, 00:08:10.459 "state": "configuring", 00:08:10.459 
"raid_level": "raid1", 00:08:10.459 "superblock": false, 00:08:10.459 "num_base_bdevs": 2, 00:08:10.459 "num_base_bdevs_discovered": 0, 00:08:10.459 "num_base_bdevs_operational": 2, 00:08:10.459 "base_bdevs_list": [ 00:08:10.459 { 00:08:10.459 "name": "BaseBdev1", 00:08:10.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.459 "is_configured": false, 00:08:10.459 "data_offset": 0, 00:08:10.459 "data_size": 0 00:08:10.459 }, 00:08:10.459 { 00:08:10.459 "name": "BaseBdev2", 00:08:10.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.459 "is_configured": false, 00:08:10.459 "data_offset": 0, 00:08:10.459 "data_size": 0 00:08:10.459 } 00:08:10.459 ] 00:08:10.459 }' 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.459 12:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.719 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.719 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.719 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.719 [2024-11-19 12:00:14.052586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.719 [2024-11-19 12:00:14.052722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:10.719 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.719 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:10.719 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.719 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:10.719 [2024-11-19 12:00:14.064527] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.719 [2024-11-19 12:00:14.064640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.719 [2024-11-19 12:00:14.064669] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.719 [2024-11-19 12:00:14.064695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.719 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.719 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:10.719 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.719 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.979 [2024-11-19 12:00:14.119352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.979 BaseBdev1 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.979 [ 00:08:10.979 { 00:08:10.979 "name": "BaseBdev1", 00:08:10.979 "aliases": [ 00:08:10.979 "85bb60ba-6663-4f18-a789-3091eea7cf70" 00:08:10.979 ], 00:08:10.979 "product_name": "Malloc disk", 00:08:10.979 "block_size": 512, 00:08:10.979 "num_blocks": 65536, 00:08:10.979 "uuid": "85bb60ba-6663-4f18-a789-3091eea7cf70", 00:08:10.979 "assigned_rate_limits": { 00:08:10.979 "rw_ios_per_sec": 0, 00:08:10.979 "rw_mbytes_per_sec": 0, 00:08:10.979 "r_mbytes_per_sec": 0, 00:08:10.979 "w_mbytes_per_sec": 0 00:08:10.979 }, 00:08:10.979 "claimed": true, 00:08:10.979 "claim_type": "exclusive_write", 00:08:10.979 "zoned": false, 00:08:10.979 "supported_io_types": { 00:08:10.979 "read": true, 00:08:10.979 "write": true, 00:08:10.979 "unmap": true, 00:08:10.979 "flush": true, 00:08:10.979 "reset": true, 00:08:10.979 "nvme_admin": false, 00:08:10.979 "nvme_io": false, 00:08:10.979 "nvme_io_md": false, 00:08:10.979 "write_zeroes": true, 00:08:10.979 "zcopy": true, 00:08:10.979 "get_zone_info": false, 00:08:10.979 "zone_management": false, 00:08:10.979 "zone_append": false, 00:08:10.979 "compare": false, 00:08:10.979 "compare_and_write": false, 00:08:10.979 "abort": true, 00:08:10.979 "seek_hole": false, 00:08:10.979 "seek_data": false, 00:08:10.979 "copy": true, 00:08:10.979 "nvme_iov_md": 
false 00:08:10.979 }, 00:08:10.979 "memory_domains": [ 00:08:10.979 { 00:08:10.979 "dma_device_id": "system", 00:08:10.979 "dma_device_type": 1 00:08:10.979 }, 00:08:10.979 { 00:08:10.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.979 "dma_device_type": 2 00:08:10.979 } 00:08:10.979 ], 00:08:10.979 "driver_specific": {} 00:08:10.979 } 00:08:10.979 ] 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.979 
12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.979 "name": "Existed_Raid", 00:08:10.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.979 "strip_size_kb": 0, 00:08:10.979 "state": "configuring", 00:08:10.979 "raid_level": "raid1", 00:08:10.979 "superblock": false, 00:08:10.979 "num_base_bdevs": 2, 00:08:10.979 "num_base_bdevs_discovered": 1, 00:08:10.979 "num_base_bdevs_operational": 2, 00:08:10.979 "base_bdevs_list": [ 00:08:10.979 { 00:08:10.979 "name": "BaseBdev1", 00:08:10.979 "uuid": "85bb60ba-6663-4f18-a789-3091eea7cf70", 00:08:10.979 "is_configured": true, 00:08:10.979 "data_offset": 0, 00:08:10.979 "data_size": 65536 00:08:10.979 }, 00:08:10.979 { 00:08:10.979 "name": "BaseBdev2", 00:08:10.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.979 "is_configured": false, 00:08:10.979 "data_offset": 0, 00:08:10.979 "data_size": 0 00:08:10.979 } 00:08:10.979 ] 00:08:10.979 }' 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.979 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.548 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.548 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.548 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.548 [2024-11-19 12:00:14.642536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.548 [2024-11-19 12:00:14.642610] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:11.548 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.548 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:11.548 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.548 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.549 [2024-11-19 12:00:14.654539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.549 [2024-11-19 12:00:14.656768] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.549 [2024-11-19 12:00:14.656855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.549 "name": "Existed_Raid", 00:08:11.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.549 "strip_size_kb": 0, 00:08:11.549 "state": "configuring", 00:08:11.549 "raid_level": "raid1", 00:08:11.549 "superblock": false, 00:08:11.549 "num_base_bdevs": 2, 00:08:11.549 "num_base_bdevs_discovered": 1, 00:08:11.549 "num_base_bdevs_operational": 2, 00:08:11.549 "base_bdevs_list": [ 00:08:11.549 { 00:08:11.549 "name": "BaseBdev1", 00:08:11.549 "uuid": "85bb60ba-6663-4f18-a789-3091eea7cf70", 00:08:11.549 "is_configured": true, 00:08:11.549 "data_offset": 0, 00:08:11.549 "data_size": 65536 00:08:11.549 }, 00:08:11.549 { 00:08:11.549 "name": "BaseBdev2", 00:08:11.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.549 "is_configured": false, 00:08:11.549 "data_offset": 0, 00:08:11.549 "data_size": 0 00:08:11.549 } 00:08:11.549 ] 
00:08:11.549 }' 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.549 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.809 [2024-11-19 12:00:15.155408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.809 [2024-11-19 12:00:15.155575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:11.809 [2024-11-19 12:00:15.155589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:11.809 [2024-11-19 12:00:15.155933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:11.809 [2024-11-19 12:00:15.156156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:11.809 [2024-11-19 12:00:15.156175] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:11.809 [2024-11-19 12:00:15.156509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.809 BaseBdev2 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.809 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.069 [ 00:08:12.069 { 00:08:12.069 "name": "BaseBdev2", 00:08:12.069 "aliases": [ 00:08:12.069 "e412902a-1330-490d-a7e7-38b7857f017d" 00:08:12.069 ], 00:08:12.069 "product_name": "Malloc disk", 00:08:12.069 "block_size": 512, 00:08:12.069 "num_blocks": 65536, 00:08:12.069 "uuid": "e412902a-1330-490d-a7e7-38b7857f017d", 00:08:12.069 "assigned_rate_limits": { 00:08:12.069 "rw_ios_per_sec": 0, 00:08:12.069 "rw_mbytes_per_sec": 0, 00:08:12.069 "r_mbytes_per_sec": 0, 00:08:12.069 "w_mbytes_per_sec": 0 00:08:12.069 }, 00:08:12.069 "claimed": true, 00:08:12.069 "claim_type": "exclusive_write", 00:08:12.069 "zoned": false, 00:08:12.069 "supported_io_types": { 00:08:12.069 "read": true, 00:08:12.069 "write": true, 00:08:12.069 "unmap": true, 00:08:12.069 "flush": true, 00:08:12.069 "reset": true, 00:08:12.069 "nvme_admin": false, 00:08:12.069 "nvme_io": false, 00:08:12.069 "nvme_io_md": false, 00:08:12.069 "write_zeroes": 
true, 00:08:12.069 "zcopy": true, 00:08:12.069 "get_zone_info": false, 00:08:12.069 "zone_management": false, 00:08:12.069 "zone_append": false, 00:08:12.069 "compare": false, 00:08:12.069 "compare_and_write": false, 00:08:12.069 "abort": true, 00:08:12.069 "seek_hole": false, 00:08:12.069 "seek_data": false, 00:08:12.069 "copy": true, 00:08:12.069 "nvme_iov_md": false 00:08:12.069 }, 00:08:12.069 "memory_domains": [ 00:08:12.069 { 00:08:12.069 "dma_device_id": "system", 00:08:12.069 "dma_device_type": 1 00:08:12.069 }, 00:08:12.069 { 00:08:12.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.069 "dma_device_type": 2 00:08:12.069 } 00:08:12.069 ], 00:08:12.069 "driver_specific": {} 00:08:12.069 } 00:08:12.069 ] 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.069 12:00:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.069 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.070 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.070 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.070 "name": "Existed_Raid", 00:08:12.070 "uuid": "224284ca-21af-4b73-b633-82b7c5608155", 00:08:12.070 "strip_size_kb": 0, 00:08:12.070 "state": "online", 00:08:12.070 "raid_level": "raid1", 00:08:12.070 "superblock": false, 00:08:12.070 "num_base_bdevs": 2, 00:08:12.070 "num_base_bdevs_discovered": 2, 00:08:12.070 "num_base_bdevs_operational": 2, 00:08:12.070 "base_bdevs_list": [ 00:08:12.070 { 00:08:12.070 "name": "BaseBdev1", 00:08:12.070 "uuid": "85bb60ba-6663-4f18-a789-3091eea7cf70", 00:08:12.070 "is_configured": true, 00:08:12.070 "data_offset": 0, 00:08:12.070 "data_size": 65536 00:08:12.070 }, 00:08:12.070 { 00:08:12.070 "name": "BaseBdev2", 00:08:12.070 "uuid": "e412902a-1330-490d-a7e7-38b7857f017d", 00:08:12.070 "is_configured": true, 00:08:12.070 "data_offset": 0, 00:08:12.070 "data_size": 65536 00:08:12.070 } 00:08:12.070 ] 00:08:12.070 }' 00:08:12.070 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.070 12:00:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.329 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:12.329 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:12.329 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:12.329 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:12.329 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:12.329 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:12.329 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:12.329 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:12.329 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.329 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.329 [2024-11-19 12:00:15.686895] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.589 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.589 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:12.589 "name": "Existed_Raid", 00:08:12.589 "aliases": [ 00:08:12.589 "224284ca-21af-4b73-b633-82b7c5608155" 00:08:12.589 ], 00:08:12.589 "product_name": "Raid Volume", 00:08:12.589 "block_size": 512, 00:08:12.589 "num_blocks": 65536, 00:08:12.589 "uuid": "224284ca-21af-4b73-b633-82b7c5608155", 00:08:12.589 "assigned_rate_limits": { 00:08:12.589 "rw_ios_per_sec": 0, 00:08:12.589 "rw_mbytes_per_sec": 0, 00:08:12.589 "r_mbytes_per_sec": 0, 00:08:12.589 
"w_mbytes_per_sec": 0 00:08:12.589 }, 00:08:12.589 "claimed": false, 00:08:12.589 "zoned": false, 00:08:12.589 "supported_io_types": { 00:08:12.589 "read": true, 00:08:12.589 "write": true, 00:08:12.589 "unmap": false, 00:08:12.589 "flush": false, 00:08:12.589 "reset": true, 00:08:12.589 "nvme_admin": false, 00:08:12.589 "nvme_io": false, 00:08:12.589 "nvme_io_md": false, 00:08:12.589 "write_zeroes": true, 00:08:12.589 "zcopy": false, 00:08:12.589 "get_zone_info": false, 00:08:12.589 "zone_management": false, 00:08:12.589 "zone_append": false, 00:08:12.589 "compare": false, 00:08:12.589 "compare_and_write": false, 00:08:12.589 "abort": false, 00:08:12.589 "seek_hole": false, 00:08:12.589 "seek_data": false, 00:08:12.589 "copy": false, 00:08:12.589 "nvme_iov_md": false 00:08:12.589 }, 00:08:12.589 "memory_domains": [ 00:08:12.589 { 00:08:12.589 "dma_device_id": "system", 00:08:12.589 "dma_device_type": 1 00:08:12.589 }, 00:08:12.589 { 00:08:12.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.589 "dma_device_type": 2 00:08:12.589 }, 00:08:12.589 { 00:08:12.589 "dma_device_id": "system", 00:08:12.589 "dma_device_type": 1 00:08:12.589 }, 00:08:12.589 { 00:08:12.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.589 "dma_device_type": 2 00:08:12.589 } 00:08:12.589 ], 00:08:12.589 "driver_specific": { 00:08:12.589 "raid": { 00:08:12.589 "uuid": "224284ca-21af-4b73-b633-82b7c5608155", 00:08:12.589 "strip_size_kb": 0, 00:08:12.589 "state": "online", 00:08:12.589 "raid_level": "raid1", 00:08:12.589 "superblock": false, 00:08:12.589 "num_base_bdevs": 2, 00:08:12.589 "num_base_bdevs_discovered": 2, 00:08:12.589 "num_base_bdevs_operational": 2, 00:08:12.589 "base_bdevs_list": [ 00:08:12.589 { 00:08:12.589 "name": "BaseBdev1", 00:08:12.589 "uuid": "85bb60ba-6663-4f18-a789-3091eea7cf70", 00:08:12.589 "is_configured": true, 00:08:12.589 "data_offset": 0, 00:08:12.589 "data_size": 65536 00:08:12.589 }, 00:08:12.589 { 00:08:12.590 "name": "BaseBdev2", 00:08:12.590 "uuid": 
"e412902a-1330-490d-a7e7-38b7857f017d", 00:08:12.590 "is_configured": true, 00:08:12.590 "data_offset": 0, 00:08:12.590 "data_size": 65536 00:08:12.590 } 00:08:12.590 ] 00:08:12.590 } 00:08:12.590 } 00:08:12.590 }' 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:12.590 BaseBdev2' 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:12.590 12:00:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.590 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.590 [2024-11-19 12:00:15.894304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.849 12:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.849 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.849 "name": "Existed_Raid", 00:08:12.849 "uuid": "224284ca-21af-4b73-b633-82b7c5608155", 00:08:12.849 "strip_size_kb": 0, 00:08:12.849 "state": "online", 00:08:12.849 "raid_level": "raid1", 00:08:12.849 "superblock": false, 00:08:12.849 "num_base_bdevs": 2, 00:08:12.849 "num_base_bdevs_discovered": 1, 00:08:12.849 "num_base_bdevs_operational": 1, 00:08:12.849 "base_bdevs_list": [ 00:08:12.849 { 
00:08:12.849 "name": null, 00:08:12.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.849 "is_configured": false, 00:08:12.849 "data_offset": 0, 00:08:12.849 "data_size": 65536 00:08:12.849 }, 00:08:12.849 { 00:08:12.849 "name": "BaseBdev2", 00:08:12.849 "uuid": "e412902a-1330-490d-a7e7-38b7857f017d", 00:08:12.849 "is_configured": true, 00:08:12.849 "data_offset": 0, 00:08:12.849 "data_size": 65536 00:08:12.849 } 00:08:12.849 ] 00:08:12.849 }' 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.849 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.112 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:13.112 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.112 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.112 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.112 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.112 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.112 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.112 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.112 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.112 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:13.112 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.112 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:13.112 [2024-11-19 12:00:16.454465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.112 [2024-11-19 12:00:16.454580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.371 [2024-11-19 12:00:16.558937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.371 [2024-11-19 12:00:16.559143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.371 [2024-11-19 12:00:16.559167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62758 00:08:13.371 12:00:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62758 ']' 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62758 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62758 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.371 killing process with pid 62758 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62758' 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62758 00:08:13.371 [2024-11-19 12:00:16.657890] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.371 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62758 00:08:13.371 [2024-11-19 12:00:16.675803] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:14.750 00:08:14.750 real 0m5.211s 00:08:14.750 user 0m7.375s 00:08:14.750 sys 0m0.917s 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.750 ************************************ 00:08:14.750 END TEST raid_state_function_test 00:08:14.750 ************************************ 00:08:14.750 12:00:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:14.750 12:00:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:14.750 12:00:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.750 12:00:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:14.750 ************************************ 00:08:14.750 START TEST raid_state_function_test_sb 00:08:14.750 ************************************ 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:14.750 Process raid pid: 63010 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63010 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63010' 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63010 00:08:14.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63010 ']' 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.750 12:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.750 [2024-11-19 12:00:18.064003] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:14.750 [2024-11-19 12:00:18.064306] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.010 [2024-11-19 12:00:18.250410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.269 [2024-11-19 12:00:18.389189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.269 [2024-11-19 12:00:18.630729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.269 [2024-11-19 12:00:18.630778] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.528 12:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.528 12:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:15.528 12:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:08:15.528 12:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.528 12:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.528 [2024-11-19 12:00:18.899968] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:15.528 [2024-11-19 12:00:18.900050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:15.528 [2024-11-19 12:00:18.900062] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.528 [2024-11-19 12:00:18.900073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.788 12:00:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.788 "name": "Existed_Raid", 00:08:15.788 "uuid": "57e4ef18-e763-452c-81a8-de49bce1a083", 00:08:15.788 "strip_size_kb": 0, 00:08:15.788 "state": "configuring", 00:08:15.788 "raid_level": "raid1", 00:08:15.788 "superblock": true, 00:08:15.788 "num_base_bdevs": 2, 00:08:15.788 "num_base_bdevs_discovered": 0, 00:08:15.788 "num_base_bdevs_operational": 2, 00:08:15.788 "base_bdevs_list": [ 00:08:15.788 { 00:08:15.788 "name": "BaseBdev1", 00:08:15.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.788 "is_configured": false, 00:08:15.788 "data_offset": 0, 00:08:15.788 "data_size": 0 00:08:15.788 }, 00:08:15.788 { 00:08:15.788 "name": "BaseBdev2", 00:08:15.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.788 "is_configured": false, 00:08:15.788 "data_offset": 0, 00:08:15.788 "data_size": 0 00:08:15.788 } 00:08:15.788 ] 00:08:15.788 }' 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.788 12:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.047 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.047 
12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.047 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.047 [2024-11-19 12:00:19.371120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.047 [2024-11-19 12:00:19.371253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:16.047 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.047 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.047 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.047 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.047 [2024-11-19 12:00:19.379081] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.047 [2024-11-19 12:00:19.379177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.047 [2024-11-19 12:00:19.379207] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.047 [2024-11-19 12:00:19.379235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.047 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.047 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:16.047 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.047 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.306 [2024-11-19 
12:00:19.427887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.306 BaseBdev1 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.306 [ 00:08:16.306 { 00:08:16.306 "name": "BaseBdev1", 00:08:16.306 "aliases": [ 00:08:16.306 "77f7837d-a44f-4e26-b98a-62962003b249" 00:08:16.306 ], 00:08:16.306 "product_name": "Malloc disk", 00:08:16.306 "block_size": 512, 00:08:16.306 "num_blocks": 
65536, 00:08:16.306 "uuid": "77f7837d-a44f-4e26-b98a-62962003b249", 00:08:16.306 "assigned_rate_limits": { 00:08:16.306 "rw_ios_per_sec": 0, 00:08:16.306 "rw_mbytes_per_sec": 0, 00:08:16.306 "r_mbytes_per_sec": 0, 00:08:16.306 "w_mbytes_per_sec": 0 00:08:16.306 }, 00:08:16.306 "claimed": true, 00:08:16.306 "claim_type": "exclusive_write", 00:08:16.306 "zoned": false, 00:08:16.306 "supported_io_types": { 00:08:16.306 "read": true, 00:08:16.306 "write": true, 00:08:16.306 "unmap": true, 00:08:16.306 "flush": true, 00:08:16.306 "reset": true, 00:08:16.306 "nvme_admin": false, 00:08:16.306 "nvme_io": false, 00:08:16.306 "nvme_io_md": false, 00:08:16.306 "write_zeroes": true, 00:08:16.306 "zcopy": true, 00:08:16.306 "get_zone_info": false, 00:08:16.306 "zone_management": false, 00:08:16.306 "zone_append": false, 00:08:16.306 "compare": false, 00:08:16.306 "compare_and_write": false, 00:08:16.306 "abort": true, 00:08:16.306 "seek_hole": false, 00:08:16.306 "seek_data": false, 00:08:16.306 "copy": true, 00:08:16.306 "nvme_iov_md": false 00:08:16.306 }, 00:08:16.306 "memory_domains": [ 00:08:16.306 { 00:08:16.306 "dma_device_id": "system", 00:08:16.306 "dma_device_type": 1 00:08:16.306 }, 00:08:16.306 { 00:08:16.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.306 "dma_device_type": 2 00:08:16.306 } 00:08:16.306 ], 00:08:16.306 "driver_specific": {} 00:08:16.306 } 00:08:16.306 ] 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.306 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.307 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.307 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.307 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.307 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.307 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.307 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.307 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.307 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.307 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.307 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.307 "name": "Existed_Raid", 00:08:16.307 "uuid": "ea62d54b-7712-40cb-9b0b-cfa2702962e3", 00:08:16.307 "strip_size_kb": 0, 00:08:16.307 "state": "configuring", 00:08:16.307 "raid_level": "raid1", 00:08:16.307 "superblock": true, 00:08:16.307 "num_base_bdevs": 2, 00:08:16.307 "num_base_bdevs_discovered": 1, 00:08:16.307 "num_base_bdevs_operational": 2, 00:08:16.307 "base_bdevs_list": [ 00:08:16.307 { 00:08:16.307 "name": "BaseBdev1", 00:08:16.307 "uuid": 
"77f7837d-a44f-4e26-b98a-62962003b249", 00:08:16.307 "is_configured": true, 00:08:16.307 "data_offset": 2048, 00:08:16.307 "data_size": 63488 00:08:16.307 }, 00:08:16.307 { 00:08:16.307 "name": "BaseBdev2", 00:08:16.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.307 "is_configured": false, 00:08:16.307 "data_offset": 0, 00:08:16.307 "data_size": 0 00:08:16.307 } 00:08:16.307 ] 00:08:16.307 }' 00:08:16.307 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.307 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.567 [2024-11-19 12:00:19.907169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.567 [2024-11-19 12:00:19.907334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.567 [2024-11-19 12:00:19.919210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.567 [2024-11-19 12:00:19.921351] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:08:16.567 [2024-11-19 12:00:19.921397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:16.567 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.826 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.826 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.826 "name": "Existed_Raid", 00:08:16.826 "uuid": "297ff783-7485-4980-96eb-ddf8924200b6", 00:08:16.826 "strip_size_kb": 0, 00:08:16.826 "state": "configuring", 00:08:16.826 "raid_level": "raid1", 00:08:16.826 "superblock": true, 00:08:16.826 "num_base_bdevs": 2, 00:08:16.826 "num_base_bdevs_discovered": 1, 00:08:16.826 "num_base_bdevs_operational": 2, 00:08:16.826 "base_bdevs_list": [ 00:08:16.826 { 00:08:16.826 "name": "BaseBdev1", 00:08:16.826 "uuid": "77f7837d-a44f-4e26-b98a-62962003b249", 00:08:16.826 "is_configured": true, 00:08:16.826 "data_offset": 2048, 00:08:16.826 "data_size": 63488 00:08:16.826 }, 00:08:16.826 { 00:08:16.826 "name": "BaseBdev2", 00:08:16.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.826 "is_configured": false, 00:08:16.826 "data_offset": 0, 00:08:16.826 "data_size": 0 00:08:16.826 } 00:08:16.826 ] 00:08:16.826 }' 00:08:16.826 12:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.827 12:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.087 [2024-11-19 12:00:20.353983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.087 [2024-11-19 12:00:20.354453] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: 
io device register 0x617000007e80 00:08:17.087 [2024-11-19 12:00:20.354508] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:17.087 [2024-11-19 12:00:20.354822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:17.087 [2024-11-19 12:00:20.355062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.087 [2024-11-19 12:00:20.355117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:17.087 BaseBdev2 00:08:17.087 [2024-11-19 12:00:20.355322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.087 12:00:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.087 [ 00:08:17.087 { 00:08:17.087 "name": "BaseBdev2", 00:08:17.087 "aliases": [ 00:08:17.087 "8a9dd3fa-fbc7-4d3a-8a70-ef33112e2fee" 00:08:17.087 ], 00:08:17.087 "product_name": "Malloc disk", 00:08:17.087 "block_size": 512, 00:08:17.087 "num_blocks": 65536, 00:08:17.087 "uuid": "8a9dd3fa-fbc7-4d3a-8a70-ef33112e2fee", 00:08:17.087 "assigned_rate_limits": { 00:08:17.087 "rw_ios_per_sec": 0, 00:08:17.087 "rw_mbytes_per_sec": 0, 00:08:17.087 "r_mbytes_per_sec": 0, 00:08:17.087 "w_mbytes_per_sec": 0 00:08:17.087 }, 00:08:17.087 "claimed": true, 00:08:17.087 "claim_type": "exclusive_write", 00:08:17.087 "zoned": false, 00:08:17.087 "supported_io_types": { 00:08:17.087 "read": true, 00:08:17.087 "write": true, 00:08:17.087 "unmap": true, 00:08:17.087 "flush": true, 00:08:17.087 "reset": true, 00:08:17.087 "nvme_admin": false, 00:08:17.087 "nvme_io": false, 00:08:17.087 "nvme_io_md": false, 00:08:17.087 "write_zeroes": true, 00:08:17.087 "zcopy": true, 00:08:17.087 "get_zone_info": false, 00:08:17.087 "zone_management": false, 00:08:17.087 "zone_append": false, 00:08:17.087 "compare": false, 00:08:17.087 "compare_and_write": false, 00:08:17.087 "abort": true, 00:08:17.087 "seek_hole": false, 00:08:17.087 "seek_data": false, 00:08:17.087 "copy": true, 00:08:17.087 "nvme_iov_md": false 00:08:17.087 }, 00:08:17.087 "memory_domains": [ 00:08:17.087 { 00:08:17.087 "dma_device_id": "system", 00:08:17.087 "dma_device_type": 1 00:08:17.087 }, 00:08:17.087 { 00:08:17.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.087 "dma_device_type": 2 00:08:17.087 } 00:08:17.087 ], 00:08:17.087 "driver_specific": {} 00:08:17.087 } 00:08:17.087 ] 
00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.087 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.088 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.088 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.088 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.088 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.088 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.088 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.088 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.088 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.088 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.088 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.088 12:00:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.088 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.088 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.088 "name": "Existed_Raid", 00:08:17.088 "uuid": "297ff783-7485-4980-96eb-ddf8924200b6", 00:08:17.088 "strip_size_kb": 0, 00:08:17.088 "state": "online", 00:08:17.088 "raid_level": "raid1", 00:08:17.088 "superblock": true, 00:08:17.088 "num_base_bdevs": 2, 00:08:17.088 "num_base_bdevs_discovered": 2, 00:08:17.088 "num_base_bdevs_operational": 2, 00:08:17.088 "base_bdevs_list": [ 00:08:17.088 { 00:08:17.088 "name": "BaseBdev1", 00:08:17.088 "uuid": "77f7837d-a44f-4e26-b98a-62962003b249", 00:08:17.088 "is_configured": true, 00:08:17.088 "data_offset": 2048, 00:08:17.088 "data_size": 63488 00:08:17.088 }, 00:08:17.088 { 00:08:17.088 "name": "BaseBdev2", 00:08:17.088 "uuid": "8a9dd3fa-fbc7-4d3a-8a70-ef33112e2fee", 00:08:17.088 "is_configured": true, 00:08:17.088 "data_offset": 2048, 00:08:17.088 "data_size": 63488 00:08:17.088 } 00:08:17.088 ] 00:08:17.088 }' 00:08:17.088 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.088 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 
00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.657 [2024-11-19 12:00:20.849499] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.657 "name": "Existed_Raid", 00:08:17.657 "aliases": [ 00:08:17.657 "297ff783-7485-4980-96eb-ddf8924200b6" 00:08:17.657 ], 00:08:17.657 "product_name": "Raid Volume", 00:08:17.657 "block_size": 512, 00:08:17.657 "num_blocks": 63488, 00:08:17.657 "uuid": "297ff783-7485-4980-96eb-ddf8924200b6", 00:08:17.657 "assigned_rate_limits": { 00:08:17.657 "rw_ios_per_sec": 0, 00:08:17.657 "rw_mbytes_per_sec": 0, 00:08:17.657 "r_mbytes_per_sec": 0, 00:08:17.657 "w_mbytes_per_sec": 0 00:08:17.657 }, 00:08:17.657 "claimed": false, 00:08:17.657 "zoned": false, 00:08:17.657 "supported_io_types": { 00:08:17.657 "read": true, 00:08:17.657 "write": true, 00:08:17.657 "unmap": false, 00:08:17.657 "flush": false, 00:08:17.657 "reset": true, 00:08:17.657 "nvme_admin": false, 00:08:17.657 "nvme_io": false, 00:08:17.657 "nvme_io_md": false, 00:08:17.657 "write_zeroes": true, 00:08:17.657 "zcopy": false, 00:08:17.657 "get_zone_info": false, 00:08:17.657 "zone_management": false, 00:08:17.657 "zone_append": false, 00:08:17.657 "compare": false, 00:08:17.657 "compare_and_write": false, 
00:08:17.657 "abort": false, 00:08:17.657 "seek_hole": false, 00:08:17.657 "seek_data": false, 00:08:17.657 "copy": false, 00:08:17.657 "nvme_iov_md": false 00:08:17.657 }, 00:08:17.657 "memory_domains": [ 00:08:17.657 { 00:08:17.657 "dma_device_id": "system", 00:08:17.657 "dma_device_type": 1 00:08:17.657 }, 00:08:17.657 { 00:08:17.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.657 "dma_device_type": 2 00:08:17.657 }, 00:08:17.657 { 00:08:17.657 "dma_device_id": "system", 00:08:17.657 "dma_device_type": 1 00:08:17.657 }, 00:08:17.657 { 00:08:17.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.657 "dma_device_type": 2 00:08:17.657 } 00:08:17.657 ], 00:08:17.657 "driver_specific": { 00:08:17.657 "raid": { 00:08:17.657 "uuid": "297ff783-7485-4980-96eb-ddf8924200b6", 00:08:17.657 "strip_size_kb": 0, 00:08:17.657 "state": "online", 00:08:17.657 "raid_level": "raid1", 00:08:17.657 "superblock": true, 00:08:17.657 "num_base_bdevs": 2, 00:08:17.657 "num_base_bdevs_discovered": 2, 00:08:17.657 "num_base_bdevs_operational": 2, 00:08:17.657 "base_bdevs_list": [ 00:08:17.657 { 00:08:17.657 "name": "BaseBdev1", 00:08:17.657 "uuid": "77f7837d-a44f-4e26-b98a-62962003b249", 00:08:17.657 "is_configured": true, 00:08:17.657 "data_offset": 2048, 00:08:17.657 "data_size": 63488 00:08:17.657 }, 00:08:17.657 { 00:08:17.657 "name": "BaseBdev2", 00:08:17.657 "uuid": "8a9dd3fa-fbc7-4d3a-8a70-ef33112e2fee", 00:08:17.657 "is_configured": true, 00:08:17.657 "data_offset": 2048, 00:08:17.657 "data_size": 63488 00:08:17.657 } 00:08:17.657 ] 00:08:17.657 } 00:08:17.657 } 00:08:17.657 }' 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:17.657 BaseBdev2' 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.657 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.657 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.657 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.657 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.657 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:17.657 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.657 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.657 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.917 [2024-11-19 12:00:21.068865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:17.917 12:00:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.917 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.917 "name": "Existed_Raid", 00:08:17.917 "uuid": "297ff783-7485-4980-96eb-ddf8924200b6", 00:08:17.917 "strip_size_kb": 0, 00:08:17.917 "state": "online", 00:08:17.917 "raid_level": "raid1", 00:08:17.917 "superblock": true, 00:08:17.917 "num_base_bdevs": 2, 00:08:17.917 "num_base_bdevs_discovered": 1, 00:08:17.917 "num_base_bdevs_operational": 1, 00:08:17.917 "base_bdevs_list": [ 00:08:17.917 { 00:08:17.917 "name": null, 00:08:17.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.917 "is_configured": false, 00:08:17.917 "data_offset": 0, 00:08:17.917 "data_size": 63488 00:08:17.917 }, 00:08:17.918 { 00:08:17.918 "name": "BaseBdev2", 00:08:17.918 "uuid": "8a9dd3fa-fbc7-4d3a-8a70-ef33112e2fee", 00:08:17.918 "is_configured": true, 00:08:17.918 "data_offset": 2048, 00:08:17.918 "data_size": 63488 00:08:17.918 } 00:08:17.918 ] 00:08:17.918 }' 00:08:17.918 
12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.918 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.514 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:18.514 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.515 [2024-11-19 12:00:21.677735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.515 [2024-11-19 12:00:21.677939] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.515 [2024-11-19 12:00:21.782232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.515 [2024-11-19 12:00:21.782409] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.515 [2024-11-19 12:00:21.782430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63010 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63010 ']' 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63010 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63010 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.515 killing process with pid 63010 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63010' 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63010 00:08:18.515 [2024-11-19 12:00:21.881443] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:18.515 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63010 00:08:18.775 [2024-11-19 12:00:21.899388] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.156 12:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:20.156 00:08:20.156 real 0m5.148s 00:08:20.156 user 0m7.258s 00:08:20.156 sys 0m0.926s 00:08:20.156 ************************************ 00:08:20.156 12:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.156 12:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.156 END TEST raid_state_function_test_sb 00:08:20.156 ************************************ 00:08:20.156 12:00:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:20.156 12:00:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:20.156 12:00:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.156 12:00:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.156 
************************************ 00:08:20.156 START TEST raid_superblock_test 00:08:20.156 ************************************ 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63258 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63258 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63258 ']' 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.156 12:00:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.156 [2024-11-19 12:00:23.267904] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:08:20.156 [2024-11-19 12:00:23.268138] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63258 ] 00:08:20.156 [2024-11-19 12:00:23.448076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.416 [2024-11-19 12:00:23.587540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.675 [2024-11-19 12:00:23.817430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.675 [2024-11-19 12:00:23.817589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:20.935 
12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.935 malloc1 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.935 [2024-11-19 12:00:24.167316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:20.935 [2024-11-19 12:00:24.167399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.935 [2024-11-19 12:00:24.167427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:20.935 [2024-11-19 12:00:24.167437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.935 [2024-11-19 12:00:24.169823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.935 [2024-11-19 12:00:24.169932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:20.935 pt1 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.935 malloc2 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.935 [2024-11-19 12:00:24.229295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:20.935 [2024-11-19 12:00:24.229445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.935 [2024-11-19 12:00:24.229489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:20.935 [2024-11-19 12:00:24.229520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.935 [2024-11-19 12:00:24.231984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.935 [2024-11-19 12:00:24.232081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:20.935 
pt2 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.935 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.935 [2024-11-19 12:00:24.241349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:20.936 [2024-11-19 12:00:24.243575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:20.936 [2024-11-19 12:00:24.243825] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:20.936 [2024-11-19 12:00:24.243883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:20.936 [2024-11-19 12:00:24.244207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:20.936 [2024-11-19 12:00:24.244427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:20.936 [2024-11-19 12:00:24.244476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:20.936 [2024-11-19 12:00:24.244701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.936 "name": "raid_bdev1", 00:08:20.936 "uuid": "28d4ca5e-2c87-4b29-b214-1a1426767510", 00:08:20.936 "strip_size_kb": 0, 00:08:20.936 "state": "online", 00:08:20.936 "raid_level": "raid1", 00:08:20.936 "superblock": true, 00:08:20.936 "num_base_bdevs": 2, 00:08:20.936 "num_base_bdevs_discovered": 2, 00:08:20.936 "num_base_bdevs_operational": 2, 00:08:20.936 "base_bdevs_list": [ 00:08:20.936 { 00:08:20.936 "name": "pt1", 00:08:20.936 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:20.936 "is_configured": true, 00:08:20.936 "data_offset": 2048, 00:08:20.936 "data_size": 63488 00:08:20.936 }, 00:08:20.936 { 00:08:20.936 "name": "pt2", 00:08:20.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.936 "is_configured": true, 00:08:20.936 "data_offset": 2048, 00:08:20.936 "data_size": 63488 00:08:20.936 } 00:08:20.936 ] 00:08:20.936 }' 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.936 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.504 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:21.504 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:21.504 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.504 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.505 [2024-11-19 12:00:24.689911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:21.505 "name": "raid_bdev1", 00:08:21.505 "aliases": [ 00:08:21.505 "28d4ca5e-2c87-4b29-b214-1a1426767510" 00:08:21.505 ], 00:08:21.505 "product_name": "Raid Volume", 00:08:21.505 "block_size": 512, 00:08:21.505 "num_blocks": 63488, 00:08:21.505 "uuid": "28d4ca5e-2c87-4b29-b214-1a1426767510", 00:08:21.505 "assigned_rate_limits": { 00:08:21.505 "rw_ios_per_sec": 0, 00:08:21.505 "rw_mbytes_per_sec": 0, 00:08:21.505 "r_mbytes_per_sec": 0, 00:08:21.505 "w_mbytes_per_sec": 0 00:08:21.505 }, 00:08:21.505 "claimed": false, 00:08:21.505 "zoned": false, 00:08:21.505 "supported_io_types": { 00:08:21.505 "read": true, 00:08:21.505 "write": true, 00:08:21.505 "unmap": false, 00:08:21.505 "flush": false, 00:08:21.505 "reset": true, 00:08:21.505 "nvme_admin": false, 00:08:21.505 "nvme_io": false, 00:08:21.505 "nvme_io_md": false, 00:08:21.505 "write_zeroes": true, 00:08:21.505 "zcopy": false, 00:08:21.505 "get_zone_info": false, 00:08:21.505 "zone_management": false, 00:08:21.505 "zone_append": false, 00:08:21.505 "compare": false, 00:08:21.505 "compare_and_write": false, 00:08:21.505 "abort": false, 00:08:21.505 "seek_hole": false, 00:08:21.505 "seek_data": false, 00:08:21.505 "copy": false, 00:08:21.505 "nvme_iov_md": false 00:08:21.505 }, 00:08:21.505 "memory_domains": [ 00:08:21.505 { 00:08:21.505 "dma_device_id": "system", 00:08:21.505 "dma_device_type": 1 00:08:21.505 }, 00:08:21.505 { 00:08:21.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.505 "dma_device_type": 2 00:08:21.505 }, 00:08:21.505 { 00:08:21.505 "dma_device_id": "system", 00:08:21.505 "dma_device_type": 1 00:08:21.505 }, 00:08:21.505 { 00:08:21.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.505 "dma_device_type": 2 00:08:21.505 } 00:08:21.505 ], 00:08:21.505 "driver_specific": { 00:08:21.505 "raid": { 00:08:21.505 "uuid": "28d4ca5e-2c87-4b29-b214-1a1426767510", 00:08:21.505 "strip_size_kb": 0, 00:08:21.505 "state": "online", 00:08:21.505 "raid_level": "raid1", 
00:08:21.505 "superblock": true, 00:08:21.505 "num_base_bdevs": 2, 00:08:21.505 "num_base_bdevs_discovered": 2, 00:08:21.505 "num_base_bdevs_operational": 2, 00:08:21.505 "base_bdevs_list": [ 00:08:21.505 { 00:08:21.505 "name": "pt1", 00:08:21.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.505 "is_configured": true, 00:08:21.505 "data_offset": 2048, 00:08:21.505 "data_size": 63488 00:08:21.505 }, 00:08:21.505 { 00:08:21.505 "name": "pt2", 00:08:21.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.505 "is_configured": true, 00:08:21.505 "data_offset": 2048, 00:08:21.505 "data_size": 63488 00:08:21.505 } 00:08:21.505 ] 00:08:21.505 } 00:08:21.505 } 00:08:21.505 }' 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:21.505 pt2' 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.505 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.765 12:00:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.765 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.765 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.765 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.765 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:21.765 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.765 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.765 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.765 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.765 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.765 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:21.765 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.765 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.766 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.766 [2024-11-19 12:00:24.940642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.766 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.766 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=28d4ca5e-2c87-4b29-b214-1a1426767510 00:08:21.766 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 28d4ca5e-2c87-4b29-b214-1a1426767510 ']' 00:08:21.766 12:00:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:21.766 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.766 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.766 [2024-11-19 12:00:24.972106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.766 [2024-11-19 12:00:24.972142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.766 [2024-11-19 12:00:24.972290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.766 [2024-11-19 12:00:24.972374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.766 [2024-11-19 12:00:24.972396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:21.766 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.766 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.766 12:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:21.766 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.766 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.766 12:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:21.766 12:00:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.766 [2024-11-19 12:00:25.111893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:21.766 [2024-11-19 12:00:25.114110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:21.766 [2024-11-19 12:00:25.114228] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:21.766 [2024-11-19 12:00:25.114303] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:21.766 [2024-11-19 12:00:25.114322] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.766 [2024-11-19 12:00:25.114333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:21.766 request: 00:08:21.766 { 00:08:21.766 "name": "raid_bdev1", 00:08:21.766 "raid_level": "raid1", 00:08:21.766 "base_bdevs": [ 00:08:21.766 "malloc1", 00:08:21.766 "malloc2" 00:08:21.766 ], 00:08:21.766 "superblock": false, 00:08:21.766 "method": "bdev_raid_create", 00:08:21.766 "req_id": 1 00:08:21.766 } 00:08:21.766 Got 
JSON-RPC error response 00:08:21.766 response: 00:08:21.766 { 00:08:21.766 "code": -17, 00:08:21.766 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:21.766 } 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.766 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.026 [2024-11-19 12:00:25.179737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:22.026 [2024-11-19 12:00:25.179858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:08:22.026 [2024-11-19 12:00:25.179898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:22.026 [2024-11-19 12:00:25.179937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.026 [2024-11-19 12:00:25.182175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.026 [2024-11-19 12:00:25.182252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:22.026 [2024-11-19 12:00:25.182361] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:22.026 [2024-11-19 12:00:25.182443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:22.026 pt1 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.026 
12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.026 "name": "raid_bdev1", 00:08:22.026 "uuid": "28d4ca5e-2c87-4b29-b214-1a1426767510", 00:08:22.026 "strip_size_kb": 0, 00:08:22.026 "state": "configuring", 00:08:22.026 "raid_level": "raid1", 00:08:22.026 "superblock": true, 00:08:22.026 "num_base_bdevs": 2, 00:08:22.026 "num_base_bdevs_discovered": 1, 00:08:22.026 "num_base_bdevs_operational": 2, 00:08:22.026 "base_bdevs_list": [ 00:08:22.026 { 00:08:22.026 "name": "pt1", 00:08:22.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.026 "is_configured": true, 00:08:22.026 "data_offset": 2048, 00:08:22.026 "data_size": 63488 00:08:22.026 }, 00:08:22.026 { 00:08:22.026 "name": null, 00:08:22.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.026 "is_configured": false, 00:08:22.026 "data_offset": 2048, 00:08:22.026 "data_size": 63488 00:08:22.026 } 00:08:22.026 ] 00:08:22.026 }' 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.026 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.286 [2024-11-19 12:00:25.631009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:22.286 [2024-11-19 12:00:25.631092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.286 [2024-11-19 12:00:25.631114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:22.286 [2024-11-19 12:00:25.631126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.286 [2024-11-19 12:00:25.631611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.286 [2024-11-19 12:00:25.631632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:22.286 [2024-11-19 12:00:25.631716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:22.286 [2024-11-19 12:00:25.631743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:22.286 [2024-11-19 12:00:25.631881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.286 [2024-11-19 12:00:25.631893] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:22.286 [2024-11-19 12:00:25.632142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:22.286 [2024-11-19 12:00:25.632319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:22.286 [2024-11-19 12:00:25.632338] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:08:22.286 [2024-11-19 12:00:25.632470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.286 pt2 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:08:22.286 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.546 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.546 "name": "raid_bdev1", 00:08:22.546 "uuid": "28d4ca5e-2c87-4b29-b214-1a1426767510", 00:08:22.546 "strip_size_kb": 0, 00:08:22.546 "state": "online", 00:08:22.546 "raid_level": "raid1", 00:08:22.546 "superblock": true, 00:08:22.546 "num_base_bdevs": 2, 00:08:22.546 "num_base_bdevs_discovered": 2, 00:08:22.546 "num_base_bdevs_operational": 2, 00:08:22.546 "base_bdevs_list": [ 00:08:22.546 { 00:08:22.546 "name": "pt1", 00:08:22.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.546 "is_configured": true, 00:08:22.546 "data_offset": 2048, 00:08:22.546 "data_size": 63488 00:08:22.546 }, 00:08:22.546 { 00:08:22.546 "name": "pt2", 00:08:22.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.546 "is_configured": true, 00:08:22.546 "data_offset": 2048, 00:08:22.546 "data_size": 63488 00:08:22.546 } 00:08:22.546 ] 00:08:22.546 }' 00:08:22.546 12:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.546 12:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.805 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:22.805 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:22.805 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:22.805 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.805 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.805 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.805 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:08:22.805 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:22.805 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.805 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.805 [2024-11-19 12:00:26.074480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.805 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.805 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:22.805 "name": "raid_bdev1", 00:08:22.805 "aliases": [ 00:08:22.805 "28d4ca5e-2c87-4b29-b214-1a1426767510" 00:08:22.805 ], 00:08:22.805 "product_name": "Raid Volume", 00:08:22.805 "block_size": 512, 00:08:22.805 "num_blocks": 63488, 00:08:22.805 "uuid": "28d4ca5e-2c87-4b29-b214-1a1426767510", 00:08:22.805 "assigned_rate_limits": { 00:08:22.805 "rw_ios_per_sec": 0, 00:08:22.805 "rw_mbytes_per_sec": 0, 00:08:22.805 "r_mbytes_per_sec": 0, 00:08:22.805 "w_mbytes_per_sec": 0 00:08:22.805 }, 00:08:22.805 "claimed": false, 00:08:22.805 "zoned": false, 00:08:22.805 "supported_io_types": { 00:08:22.805 "read": true, 00:08:22.805 "write": true, 00:08:22.805 "unmap": false, 00:08:22.805 "flush": false, 00:08:22.805 "reset": true, 00:08:22.805 "nvme_admin": false, 00:08:22.805 "nvme_io": false, 00:08:22.805 "nvme_io_md": false, 00:08:22.805 "write_zeroes": true, 00:08:22.805 "zcopy": false, 00:08:22.805 "get_zone_info": false, 00:08:22.805 "zone_management": false, 00:08:22.805 "zone_append": false, 00:08:22.805 "compare": false, 00:08:22.805 "compare_and_write": false, 00:08:22.805 "abort": false, 00:08:22.805 "seek_hole": false, 00:08:22.805 "seek_data": false, 00:08:22.805 "copy": false, 00:08:22.805 "nvme_iov_md": false 00:08:22.805 }, 00:08:22.805 "memory_domains": [ 00:08:22.805 { 00:08:22.805 "dma_device_id": 
"system", 00:08:22.805 "dma_device_type": 1 00:08:22.805 }, 00:08:22.805 { 00:08:22.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.805 "dma_device_type": 2 00:08:22.805 }, 00:08:22.805 { 00:08:22.805 "dma_device_id": "system", 00:08:22.805 "dma_device_type": 1 00:08:22.805 }, 00:08:22.805 { 00:08:22.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.805 "dma_device_type": 2 00:08:22.805 } 00:08:22.805 ], 00:08:22.805 "driver_specific": { 00:08:22.805 "raid": { 00:08:22.805 "uuid": "28d4ca5e-2c87-4b29-b214-1a1426767510", 00:08:22.805 "strip_size_kb": 0, 00:08:22.805 "state": "online", 00:08:22.805 "raid_level": "raid1", 00:08:22.805 "superblock": true, 00:08:22.805 "num_base_bdevs": 2, 00:08:22.805 "num_base_bdevs_discovered": 2, 00:08:22.806 "num_base_bdevs_operational": 2, 00:08:22.806 "base_bdevs_list": [ 00:08:22.806 { 00:08:22.806 "name": "pt1", 00:08:22.806 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.806 "is_configured": true, 00:08:22.806 "data_offset": 2048, 00:08:22.806 "data_size": 63488 00:08:22.806 }, 00:08:22.806 { 00:08:22.806 "name": "pt2", 00:08:22.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.806 "is_configured": true, 00:08:22.806 "data_offset": 2048, 00:08:22.806 "data_size": 63488 00:08:22.806 } 00:08:22.806 ] 00:08:22.806 } 00:08:22.806 } 00:08:22.806 }' 00:08:22.806 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:22.806 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:22.806 pt2' 00:08:22.806 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.065 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.066 [2024-11-19 12:00:26.310117] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 28d4ca5e-2c87-4b29-b214-1a1426767510 '!=' 28d4ca5e-2c87-4b29-b214-1a1426767510 ']' 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.066 [2024-11-19 12:00:26.337825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.066 "name": "raid_bdev1", 00:08:23.066 "uuid": "28d4ca5e-2c87-4b29-b214-1a1426767510", 00:08:23.066 "strip_size_kb": 0, 00:08:23.066 "state": "online", 00:08:23.066 "raid_level": "raid1", 00:08:23.066 "superblock": true, 00:08:23.066 "num_base_bdevs": 2, 00:08:23.066 "num_base_bdevs_discovered": 1, 00:08:23.066 "num_base_bdevs_operational": 1, 00:08:23.066 "base_bdevs_list": [ 00:08:23.066 { 00:08:23.066 "name": null, 00:08:23.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.066 "is_configured": false, 00:08:23.066 "data_offset": 0, 00:08:23.066 "data_size": 63488 00:08:23.066 }, 00:08:23.066 { 00:08:23.066 "name": "pt2", 00:08:23.066 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:23.066 "is_configured": true, 00:08:23.066 "data_offset": 2048, 00:08:23.066 "data_size": 63488 00:08:23.066 } 00:08:23.066 ] 00:08:23.066 }' 
00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.066 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.633 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:23.633 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.633 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.633 [2024-11-19 12:00:26.797062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:23.633 [2024-11-19 12:00:26.797099] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.634 [2024-11-19 12:00:26.797196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.634 [2024-11-19 12:00:26.797250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:23.634 [2024-11-19 12:00:26.797263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.634 [2024-11-19 12:00:26.872849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:23.634 [2024-11-19 12:00:26.872956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.634 [2024-11-19 12:00:26.873002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:23.634 [2024-11-19 12:00:26.873034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.634 
[2024-11-19 12:00:26.875208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.634 [2024-11-19 12:00:26.875281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:23.634 [2024-11-19 12:00:26.875381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:23.634 [2024-11-19 12:00:26.875455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:23.634 [2024-11-19 12:00:26.875601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:23.634 [2024-11-19 12:00:26.875641] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:23.634 [2024-11-19 12:00:26.875874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:23.634 [2024-11-19 12:00:26.876063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:23.634 [2024-11-19 12:00:26.876105] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:23.634 [2024-11-19 12:00:26.876286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.634 pt2 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.634 "name": "raid_bdev1", 00:08:23.634 "uuid": "28d4ca5e-2c87-4b29-b214-1a1426767510", 00:08:23.634 "strip_size_kb": 0, 00:08:23.634 "state": "online", 00:08:23.634 "raid_level": "raid1", 00:08:23.634 "superblock": true, 00:08:23.634 "num_base_bdevs": 2, 00:08:23.634 "num_base_bdevs_discovered": 1, 00:08:23.634 "num_base_bdevs_operational": 1, 00:08:23.634 "base_bdevs_list": [ 00:08:23.634 { 00:08:23.634 "name": null, 00:08:23.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.634 "is_configured": false, 00:08:23.634 "data_offset": 2048, 00:08:23.634 "data_size": 63488 00:08:23.634 }, 00:08:23.634 { 00:08:23.634 "name": "pt2", 00:08:23.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:23.634 "is_configured": true, 00:08:23.634 "data_offset": 2048, 00:08:23.634 "data_size": 63488 00:08:23.634 } 00:08:23.634 ] 00:08:23.634 }' 
00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.634 12:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.225 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.225 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.225 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.225 [2024-11-19 12:00:27.316130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.225 [2024-11-19 12:00:27.316164] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.225 [2024-11-19 12:00:27.316249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.225 [2024-11-19 12:00:27.316313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.225 [2024-11-19 12:00:27.316321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:24.225 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.225 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.225 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.225 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.226 [2024-11-19 12:00:27.364102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:24.226 [2024-11-19 12:00:27.364178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.226 [2024-11-19 12:00:27.364196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:24.226 [2024-11-19 12:00:27.364206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.226 [2024-11-19 12:00:27.366436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.226 [2024-11-19 12:00:27.366478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:24.226 [2024-11-19 12:00:27.366574] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:24.226 [2024-11-19 12:00:27.366620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:24.226 [2024-11-19 12:00:27.366744] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:24.226 [2024-11-19 12:00:27.366755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.226 [2024-11-19 12:00:27.366771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:24.226 [2024-11-19 12:00:27.366842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:08:24.226 [2024-11-19 12:00:27.366931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:24.226 [2024-11-19 12:00:27.366940] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:24.226 [2024-11-19 12:00:27.367231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:24.226 [2024-11-19 12:00:27.367482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:24.226 [2024-11-19 12:00:27.367500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:24.226 [2024-11-19 12:00:27.367655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.226 pt1 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.226 "name": "raid_bdev1", 00:08:24.226 "uuid": "28d4ca5e-2c87-4b29-b214-1a1426767510", 00:08:24.226 "strip_size_kb": 0, 00:08:24.226 "state": "online", 00:08:24.226 "raid_level": "raid1", 00:08:24.226 "superblock": true, 00:08:24.226 "num_base_bdevs": 2, 00:08:24.226 "num_base_bdevs_discovered": 1, 00:08:24.226 "num_base_bdevs_operational": 1, 00:08:24.226 "base_bdevs_list": [ 00:08:24.226 { 00:08:24.226 "name": null, 00:08:24.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.226 "is_configured": false, 00:08:24.226 "data_offset": 2048, 00:08:24.226 "data_size": 63488 00:08:24.226 }, 00:08:24.226 { 00:08:24.226 "name": "pt2", 00:08:24.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.226 "is_configured": true, 00:08:24.226 "data_offset": 2048, 00:08:24.226 "data_size": 63488 00:08:24.226 } 00:08:24.226 ] 00:08:24.226 }' 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.226 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.486 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:24.486 12:00:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:24.486 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.486 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.486 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.745 [2024-11-19 12:00:27.871463] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 28d4ca5e-2c87-4b29-b214-1a1426767510 '!=' 28d4ca5e-2c87-4b29-b214-1a1426767510 ']' 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63258 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63258 ']' 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63258 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63258 00:08:24.745 killing 
process with pid 63258 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63258' 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63258 00:08:24.745 [2024-11-19 12:00:27.945069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.745 [2024-11-19 12:00:27.945219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.745 12:00:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63258 00:08:24.745 [2024-11-19 12:00:27.945273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.745 [2024-11-19 12:00:27.945288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:25.004 [2024-11-19 12:00:28.155499] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.947 ************************************ 00:08:25.948 END TEST raid_superblock_test 00:08:25.948 ************************************ 00:08:25.948 12:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:25.948 00:08:25.948 real 0m6.111s 00:08:25.948 user 0m9.103s 00:08:25.948 sys 0m1.187s 00:08:25.948 12:00:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.948 12:00:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.208 12:00:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:26.208 12:00:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:26.208 12:00:29 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.208 12:00:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.208 ************************************ 00:08:26.208 START TEST raid_read_error_test 00:08:26.208 ************************************ 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:26.208 12:00:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sfOitdPvH5 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63588 00:08:26.208 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:26.209 12:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63588 00:08:26.209 12:00:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63588 ']' 00:08:26.209 12:00:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.209 12:00:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.209 12:00:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
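The `mktemp -p /raidtest` above creates the bdevperf log that the test parses at the very end to extract a per-second failure rate (the `grep -v Job ... | grep raid_bdev1 | awk '{print $6}'` pipeline visible later in the trace). A standalone sketch of that extraction over a fabricated one-line log — the real bdevperf summary layout may differ; this line is shaped only so that field 6 holds the failure rate, as the awk expression assumes:

```shell
# Sketch: how fail_per_s is pulled out of the bdevperf log at test end.
# The results line below is fabricated for illustration; only the
# "field 6 = failures per second" convention is taken from the trace.
bdevperf_log=$(mktemp)
printf '%s\n' \
  'Job: raid_bdev1 (Core Mask 0x1)' \
  'raid_bdev1 : 18052.30 IOPS 2256.54 0.00 fails/s' \
  > "$bdevperf_log"
# drop the "Job" header line, keep the raid_bdev1 results row,
# and take its sixth whitespace-separated field
fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
echo "$fail_per_s"
rm -f "$bdevperf_log"
```

The final check in the trace, `[[ 0.00 = \0\.\0\0 ]]`, is just this value compared against a literal `0.00`: raid1 is expected to absorb every injected read error without surfacing an I/O failure.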
00:08:26.209 12:00:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.209 12:00:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.209 [2024-11-19 12:00:29.461070] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:26.209 [2024-11-19 12:00:29.461319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63588 ] 00:08:26.467 [2024-11-19 12:00:29.625970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.467 [2024-11-19 12:00:29.744337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.726 [2024-11-19 12:00:29.948666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.726 [2024-11-19 12:00:29.948815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.985 BaseBdev1_malloc 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.985 true 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.985 [2024-11-19 12:00:30.352690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:26.985 [2024-11-19 12:00:30.352754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.985 [2024-11-19 12:00:30.352777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:26.985 [2024-11-19 12:00:30.352790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.985 [2024-11-19 12:00:30.355222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.985 [2024-11-19 12:00:30.355266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:26.985 BaseBdev1 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.985 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
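The RPCs traced above assemble each base device as a three-layer stack (malloc backing store, error-injection wrapper, passthru on top) before the raid1 is created. A standalone sketch of that sequence, with `rpc_cmd` stubbed to echo so it can be read and run outside the suite — in the real run it drives a live SPDK target over `/var/tmp/spdk.sock`:

```shell
# Sketch of the per-base-bdev setup performed by raid_io_error_test.
# rpc_cmd is stubbed here; the command names and arguments are the
# ones visible in the trace.
rpc_cmd() { echo "rpc_cmd $*"; }

for i in 1 2; do
  # 32 MiB malloc bdev with 512-byte blocks as the backing store
  rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
  # error-injection bdev wrapping it; SPDK names the result EE_<base>
  rpc_cmd bdev_error_create "BaseBdev${i}_malloc"
  # passthru bdev on top gives the raid layer a stable name to claim
  rpc_cmd bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
# assemble the mirror, -s writes an on-disk superblock
rpc_cmd bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
```

Injecting an error into `EE_BaseBdev1_malloc` (as done further down with `bdev_error_inject_error`) therefore fails I/O beneath `BaseBdev1` without the raid layer seeing anything unusual about the bdev itself.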
00:08:27.245 BaseBdev2_malloc 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.245 true 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.245 [2024-11-19 12:00:30.418822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:27.245 [2024-11-19 12:00:30.418885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.245 [2024-11-19 12:00:30.418908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:27.245 [2024-11-19 12:00:30.418920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.245 [2024-11-19 12:00:30.421216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.245 [2024-11-19 12:00:30.421259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:27.245 BaseBdev2 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:27.245 12:00:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.245 [2024-11-19 12:00:30.430837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.245 [2024-11-19 12:00:30.432708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.245 [2024-11-19 12:00:30.432898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:27.245 [2024-11-19 12:00:30.432913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:27.245 [2024-11-19 12:00:30.433161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:27.245 [2024-11-19 12:00:30.433350] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:27.245 [2024-11-19 12:00:30.433360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:27.245 [2024-11-19 12:00:30.433504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
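The `verify_raid_bdev_state` helper invoked here boils down to one jq selection over `bdev_raid_get_bdevs all` output, followed by field checks on the selected object. A standalone sketch against a trimmed copy of the JSON captured in the trace (requires jq; the array is fed from a here-document instead of a live RPC):

```shell
# Sketch of the state check: select the raid_bdev1 entry, then read
# individual fields from it. JSON content is trimmed from the trace.
raid_bdev_info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<'EOF'
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2
  }
]
EOF
)
state=$(echo "$raid_bdev_info" | jq -r '.state')
level=$(echo "$raid_bdev_info" | jq -r '.raid_level')
echo "$state $level"
```

The helper's locals (`expected_state=online`, `raid_level=raid1`, `num_base_bdevs_operational=2`) are simply compared against the corresponding fields of this object.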
00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.245 "name": "raid_bdev1", 00:08:27.245 "uuid": "2c05fdea-62e4-4142-ad02-d14f73139d6d", 00:08:27.245 "strip_size_kb": 0, 00:08:27.245 "state": "online", 00:08:27.245 "raid_level": "raid1", 00:08:27.245 "superblock": true, 00:08:27.245 "num_base_bdevs": 2, 00:08:27.245 "num_base_bdevs_discovered": 2, 00:08:27.245 "num_base_bdevs_operational": 2, 00:08:27.245 "base_bdevs_list": [ 00:08:27.245 { 00:08:27.245 "name": "BaseBdev1", 00:08:27.245 "uuid": "c4c51f29-ca67-5263-8608-bae096020e31", 00:08:27.245 "is_configured": true, 00:08:27.245 "data_offset": 2048, 00:08:27.245 "data_size": 63488 00:08:27.245 }, 00:08:27.245 { 00:08:27.245 "name": "BaseBdev2", 00:08:27.245 "uuid": "dade30a0-4416-5635-81b0-a3d52b8b8571", 00:08:27.245 "is_configured": true, 00:08:27.245 "data_offset": 2048, 00:08:27.245 "data_size": 63488 00:08:27.245 } 00:08:27.245 ] 00:08:27.245 }' 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.245 12:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.814 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:27.814 12:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:27.814 [2024-11-19 12:00:31.007107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.750 12:00:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.750 "name": "raid_bdev1", 00:08:28.750 "uuid": "2c05fdea-62e4-4142-ad02-d14f73139d6d", 00:08:28.750 "strip_size_kb": 0, 00:08:28.750 "state": "online", 00:08:28.750 "raid_level": "raid1", 00:08:28.750 "superblock": true, 00:08:28.750 "num_base_bdevs": 2, 00:08:28.750 "num_base_bdevs_discovered": 2, 00:08:28.750 "num_base_bdevs_operational": 2, 00:08:28.750 "base_bdevs_list": [ 00:08:28.750 { 00:08:28.750 "name": "BaseBdev1", 00:08:28.750 "uuid": "c4c51f29-ca67-5263-8608-bae096020e31", 00:08:28.750 "is_configured": true, 00:08:28.750 "data_offset": 2048, 00:08:28.750 "data_size": 63488 00:08:28.750 }, 00:08:28.750 { 00:08:28.750 "name": "BaseBdev2", 00:08:28.750 "uuid": "dade30a0-4416-5635-81b0-a3d52b8b8571", 00:08:28.750 "is_configured": true, 00:08:28.750 "data_offset": 2048, 00:08:28.750 "data_size": 63488 
00:08:28.750 } 00:08:28.750 ] 00:08:28.750 }' 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.750 12:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.009 12:00:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:29.009 12:00:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.009 12:00:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.009 [2024-11-19 12:00:32.354737] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.009 [2024-11-19 12:00:32.354780] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.009 [2024-11-19 12:00:32.357534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.009 [2024-11-19 12:00:32.357613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.009 [2024-11-19 12:00:32.357717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.009 [2024-11-19 12:00:32.357773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:29.009 12:00:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.009 { 00:08:29.009 "results": [ 00:08:29.009 { 00:08:29.009 "job": "raid_bdev1", 00:08:29.009 "core_mask": "0x1", 00:08:29.009 "workload": "randrw", 00:08:29.009 "percentage": 50, 00:08:29.009 "status": "finished", 00:08:29.009 "queue_depth": 1, 00:08:29.009 "io_size": 131072, 00:08:29.009 "runtime": 1.348471, 00:08:29.009 "iops": 18052.297750563415, 00:08:29.009 "mibps": 2256.537218820427, 00:08:29.009 "io_failed": 0, 00:08:29.009 "io_timeout": 0, 00:08:29.009 "avg_latency_us": 52.78415799525952, 00:08:29.009 "min_latency_us": 
22.91703056768559, 00:08:29.009 "max_latency_us": 1395.1441048034935 00:08:29.009 } 00:08:29.009 ], 00:08:29.009 "core_count": 1 00:08:29.009 } 00:08:29.009 12:00:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63588 00:08:29.009 12:00:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63588 ']' 00:08:29.009 12:00:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63588 00:08:29.009 12:00:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:29.010 12:00:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.010 12:00:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63588 00:08:29.269 12:00:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.269 12:00:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.269 12:00:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63588' 00:08:29.269 killing process with pid 63588 00:08:29.269 12:00:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63588 00:08:29.269 [2024-11-19 12:00:32.405719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.269 12:00:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63588 00:08:29.269 [2024-11-19 12:00:32.550708] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.645 12:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sfOitdPvH5 00:08:30.645 12:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:30.645 12:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:30.645 12:00:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:30.645 12:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:30.645 12:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.645 12:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:30.645 12:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:30.645 ************************************ 00:08:30.645 END TEST raid_read_error_test 00:08:30.645 ************************************ 00:08:30.645 00:08:30.645 real 0m4.390s 00:08:30.645 user 0m5.236s 00:08:30.645 sys 0m0.575s 00:08:30.645 12:00:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.645 12:00:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.645 12:00:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:30.645 12:00:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:30.645 12:00:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.645 12:00:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.645 ************************************ 00:08:30.645 START TEST raid_write_error_test 00:08:30.645 ************************************ 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2dRezg9UHn 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63728 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 63728 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63728 ']' 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.645 12:00:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.645 [2024-11-19 12:00:33.954558] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
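The write variant starting here differs from the read test only in its expectation: for raid1 an injected read error is recovered from the surviving mirror leg and both base bdevs stay operational (`expected_num_base_bdevs=2` in the read run above), while an injected write error causes the raid layer to fail and remove the bad base bdev, leaving one (`expected_num_base_bdevs=1`, seen at the end of this trace). That branch restated as a standalone helper — the function name is illustrative, the logic matches the `[[ raid1 = raid1 ]] && [[ $error_io_type = write ]]` checks in the trace:

```shell
# Sketch of the expectation logic: how many base bdevs should remain
# operational after the injected error, given level and I/O direction.
expected_num_base_bdevs() {
  raid_level=$1; num_base_bdevs=$2; error_io_type=$3
  if [ "$raid_level" = raid1 ] && [ "$error_io_type" = write ]; then
    echo 1                   # failing mirror leg is removed from the array
  else
    echo "$num_base_bdevs"   # reads are retried on the mirror; nothing removed
  fi
}
expected_num_base_bdevs raid1 2 read
expected_num_base_bdevs raid1 2 write
```

This is why the write test later verifies `raid_bdev1 online raid1 0 1` where the read test verified `... 0 2`.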
00:08:30.646 [2024-11-19 12:00:33.954701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63728 ] 00:08:30.905 [2024-11-19 12:00:34.139703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.905 [2024-11-19 12:00:34.260147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.164 [2024-11-19 12:00:34.471931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.164 [2024-11-19 12:00:34.472013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.733 BaseBdev1_malloc 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.733 true 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.733 [2024-11-19 12:00:34.872794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:31.733 [2024-11-19 12:00:34.872862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.733 [2024-11-19 12:00:34.872886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:31.733 [2024-11-19 12:00:34.872898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.733 [2024-11-19 12:00:34.875088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.733 [2024-11-19 12:00:34.875223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:31.733 BaseBdev1 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.733 BaseBdev2_malloc 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:31.733 12:00:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.733 true 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.733 [2024-11-19 12:00:34.938847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:31.733 [2024-11-19 12:00:34.938900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.733 [2024-11-19 12:00:34.938918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:31.733 [2024-11-19 12:00:34.938928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.733 [2024-11-19 12:00:34.941047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.733 [2024-11-19 12:00:34.941088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:31.733 BaseBdev2 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.733 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.733 [2024-11-19 12:00:34.950880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:31.733 [2024-11-19 12:00:34.952631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.733 [2024-11-19 12:00:34.952904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:31.733 [2024-11-19 12:00:34.952926] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:31.733 [2024-11-19 12:00:34.953171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:31.734 [2024-11-19 12:00:34.953351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:31.734 [2024-11-19 12:00:34.953372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:31.734 [2024-11-19 12:00:34.953511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.734 12:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.734 12:00:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.734 "name": "raid_bdev1", 00:08:31.734 "uuid": "ff178ecb-1566-47ee-a5b7-4bb4357b70c6", 00:08:31.734 "strip_size_kb": 0, 00:08:31.734 "state": "online", 00:08:31.734 "raid_level": "raid1", 00:08:31.734 "superblock": true, 00:08:31.734 "num_base_bdevs": 2, 00:08:31.734 "num_base_bdevs_discovered": 2, 00:08:31.734 "num_base_bdevs_operational": 2, 00:08:31.734 "base_bdevs_list": [ 00:08:31.734 { 00:08:31.734 "name": "BaseBdev1", 00:08:31.734 "uuid": "6c63c090-9158-5718-9005-510023faec65", 00:08:31.734 "is_configured": true, 00:08:31.734 "data_offset": 2048, 00:08:31.734 "data_size": 63488 00:08:31.734 }, 00:08:31.734 { 00:08:31.734 "name": "BaseBdev2", 00:08:31.734 "uuid": "cc0a65e3-6edb-53ff-9f86-21b072e33856", 00:08:31.734 "is_configured": true, 00:08:31.734 "data_offset": 2048, 00:08:31.734 "data_size": 63488 00:08:31.734 } 00:08:31.734 ] 00:08:31.734 }' 00:08:31.734 12:00:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.734 12:00:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.300 12:00:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:32.300 12:00:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:32.300 [2024-11-19 12:00:35.487404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.238 [2024-11-19 12:00:36.399070] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:33.238 [2024-11-19 12:00:36.399226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:33.238 [2024-11-19 12:00:36.399460] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.238 "name": "raid_bdev1", 00:08:33.238 "uuid": "ff178ecb-1566-47ee-a5b7-4bb4357b70c6", 00:08:33.238 "strip_size_kb": 0, 00:08:33.238 "state": "online", 00:08:33.238 "raid_level": "raid1", 00:08:33.238 "superblock": true, 00:08:33.238 "num_base_bdevs": 2, 00:08:33.238 "num_base_bdevs_discovered": 1, 00:08:33.238 "num_base_bdevs_operational": 1, 00:08:33.238 "base_bdevs_list": [ 00:08:33.238 { 00:08:33.238 "name": null, 00:08:33.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.238 "is_configured": false, 00:08:33.238 "data_offset": 0, 00:08:33.238 "data_size": 63488 00:08:33.238 }, 00:08:33.238 { 00:08:33.238 "name": 
"BaseBdev2", 00:08:33.238 "uuid": "cc0a65e3-6edb-53ff-9f86-21b072e33856", 00:08:33.238 "is_configured": true, 00:08:33.238 "data_offset": 2048, 00:08:33.238 "data_size": 63488 00:08:33.238 } 00:08:33.238 ] 00:08:33.238 }' 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.238 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.498 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:33.498 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.498 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.498 [2024-11-19 12:00:36.864258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.498 [2024-11-19 12:00:36.864303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.498 [2024-11-19 12:00:36.867258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.498 [2024-11-19 12:00:36.867306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.498 [2024-11-19 12:00:36.867379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.498 [2024-11-19 12:00:36.867390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:33.498 { 00:08:33.498 "results": [ 00:08:33.498 { 00:08:33.498 "job": "raid_bdev1", 00:08:33.498 "core_mask": "0x1", 00:08:33.498 "workload": "randrw", 00:08:33.498 "percentage": 50, 00:08:33.498 "status": "finished", 00:08:33.498 "queue_depth": 1, 00:08:33.498 "io_size": 131072, 00:08:33.498 "runtime": 1.377458, 00:08:33.498 "iops": 20594.457326466578, 00:08:33.498 "mibps": 2574.307165808322, 00:08:33.498 "io_failed": 0, 00:08:33.498 "io_timeout": 0, 
00:08:33.498 "avg_latency_us": 45.869291187314815, 00:08:33.498 "min_latency_us": 23.02882096069869, 00:08:33.498 "max_latency_us": 1509.6174672489083 00:08:33.498 } 00:08:33.498 ], 00:08:33.498 "core_count": 1 00:08:33.498 } 00:08:33.498 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.498 12:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63728 00:08:33.498 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63728 ']' 00:08:33.498 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63728 00:08:33.757 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:33.757 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.757 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63728 00:08:33.757 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.757 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.757 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63728' 00:08:33.757 killing process with pid 63728 00:08:33.757 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63728 00:08:33.757 [2024-11-19 12:00:36.911424] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.757 12:00:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63728 00:08:33.757 [2024-11-19 12:00:37.051904] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.144 12:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2dRezg9UHn 00:08:35.144 12:00:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:35.144 12:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:35.144 12:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:35.144 12:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:35.144 12:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.144 12:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:35.144 ************************************ 00:08:35.144 END TEST raid_write_error_test 00:08:35.144 ************************************ 00:08:35.144 12:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:35.144 00:08:35.144 real 0m4.413s 00:08:35.144 user 0m5.284s 00:08:35.144 sys 0m0.604s 00:08:35.144 12:00:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.144 12:00:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.144 12:00:38 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:35.144 12:00:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:35.144 12:00:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:35.144 12:00:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:35.144 12:00:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.144 12:00:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.145 ************************************ 00:08:35.145 START TEST raid_state_function_test 00:08:35.145 ************************************ 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:35.145 
12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:35.145 Process raid pid: 63872 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63872 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63872' 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63872 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:35.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63872 ']' 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.145 12:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.145 [2024-11-19 12:00:38.385470] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:35.145 [2024-11-19 12:00:38.385675] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.404 [2024-11-19 12:00:38.545273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.404 [2024-11-19 12:00:38.667855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.663 [2024-11-19 12:00:38.892624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.663 [2024-11-19 12:00:38.892761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.922 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.922 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:35.922 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.922 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.922 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.922 [2024-11-19 12:00:39.235213] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.922 [2024-11-19 12:00:39.235362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.922 [2024-11-19 12:00:39.235392] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.922 [2024-11-19 12:00:39.235416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.922 [2024-11-19 12:00:39.235436] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:35.922 [2024-11-19 12:00:39.235457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:35.922 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.922 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.922 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.922 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.922 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.922 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.922 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.922 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.923 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.923 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.923 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.923 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.923 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:35.923 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.923 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.923 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.923 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.923 "name": "Existed_Raid", 00:08:35.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.923 "strip_size_kb": 64, 00:08:35.923 "state": "configuring", 00:08:35.923 "raid_level": "raid0", 00:08:35.923 "superblock": false, 00:08:35.923 "num_base_bdevs": 3, 00:08:35.923 "num_base_bdevs_discovered": 0, 00:08:35.923 "num_base_bdevs_operational": 3, 00:08:35.923 "base_bdevs_list": [ 00:08:35.923 { 00:08:35.923 "name": "BaseBdev1", 00:08:35.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.923 "is_configured": false, 00:08:35.923 "data_offset": 0, 00:08:35.923 "data_size": 0 00:08:35.923 }, 00:08:35.923 { 00:08:35.923 "name": "BaseBdev2", 00:08:35.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.923 "is_configured": false, 00:08:35.923 "data_offset": 0, 00:08:35.923 "data_size": 0 00:08:35.923 }, 00:08:35.923 { 00:08:35.923 "name": "BaseBdev3", 00:08:35.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.923 "is_configured": false, 00:08:35.923 "data_offset": 0, 00:08:35.923 "data_size": 0 00:08:35.923 } 00:08:35.923 ] 00:08:35.923 }' 00:08:35.923 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.923 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.491 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.491 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.491 12:00:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.491 [2024-11-19 12:00:39.614559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:36.491 [2024-11-19 12:00:39.614684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:36.491 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.491 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:36.491 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.491 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.491 [2024-11-19 12:00:39.626525] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.491 [2024-11-19 12:00:39.626620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.491 [2024-11-19 12:00:39.626649] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.492 [2024-11-19 12:00:39.626672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.492 [2024-11-19 12:00:39.626690] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:36.492 [2024-11-19 12:00:39.626712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.492 [2024-11-19 12:00:39.675410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.492 BaseBdev1 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.492 [ 00:08:36.492 { 00:08:36.492 "name": "BaseBdev1", 00:08:36.492 "aliases": [ 00:08:36.492 "052bcecd-c868-46b1-8813-919c9697b5b8" 00:08:36.492 ], 00:08:36.492 
"product_name": "Malloc disk", 00:08:36.492 "block_size": 512, 00:08:36.492 "num_blocks": 65536, 00:08:36.492 "uuid": "052bcecd-c868-46b1-8813-919c9697b5b8", 00:08:36.492 "assigned_rate_limits": { 00:08:36.492 "rw_ios_per_sec": 0, 00:08:36.492 "rw_mbytes_per_sec": 0, 00:08:36.492 "r_mbytes_per_sec": 0, 00:08:36.492 "w_mbytes_per_sec": 0 00:08:36.492 }, 00:08:36.492 "claimed": true, 00:08:36.492 "claim_type": "exclusive_write", 00:08:36.492 "zoned": false, 00:08:36.492 "supported_io_types": { 00:08:36.492 "read": true, 00:08:36.492 "write": true, 00:08:36.492 "unmap": true, 00:08:36.492 "flush": true, 00:08:36.492 "reset": true, 00:08:36.492 "nvme_admin": false, 00:08:36.492 "nvme_io": false, 00:08:36.492 "nvme_io_md": false, 00:08:36.492 "write_zeroes": true, 00:08:36.492 "zcopy": true, 00:08:36.492 "get_zone_info": false, 00:08:36.492 "zone_management": false, 00:08:36.492 "zone_append": false, 00:08:36.492 "compare": false, 00:08:36.492 "compare_and_write": false, 00:08:36.492 "abort": true, 00:08:36.492 "seek_hole": false, 00:08:36.492 "seek_data": false, 00:08:36.492 "copy": true, 00:08:36.492 "nvme_iov_md": false 00:08:36.492 }, 00:08:36.492 "memory_domains": [ 00:08:36.492 { 00:08:36.492 "dma_device_id": "system", 00:08:36.492 "dma_device_type": 1 00:08:36.492 }, 00:08:36.492 { 00:08:36.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.492 "dma_device_type": 2 00:08:36.492 } 00:08:36.492 ], 00:08:36.492 "driver_specific": {} 00:08:36.492 } 00:08:36.492 ] 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.492 12:00:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.492 "name": "Existed_Raid", 00:08:36.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.492 "strip_size_kb": 64, 00:08:36.492 "state": "configuring", 00:08:36.492 "raid_level": "raid0", 00:08:36.492 "superblock": false, 00:08:36.492 "num_base_bdevs": 3, 00:08:36.492 "num_base_bdevs_discovered": 1, 00:08:36.492 "num_base_bdevs_operational": 3, 00:08:36.492 "base_bdevs_list": [ 00:08:36.492 { 00:08:36.492 "name": "BaseBdev1", 
00:08:36.492 "uuid": "052bcecd-c868-46b1-8813-919c9697b5b8", 00:08:36.492 "is_configured": true, 00:08:36.492 "data_offset": 0, 00:08:36.492 "data_size": 65536 00:08:36.492 }, 00:08:36.492 { 00:08:36.492 "name": "BaseBdev2", 00:08:36.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.492 "is_configured": false, 00:08:36.492 "data_offset": 0, 00:08:36.492 "data_size": 0 00:08:36.492 }, 00:08:36.492 { 00:08:36.492 "name": "BaseBdev3", 00:08:36.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.492 "is_configured": false, 00:08:36.492 "data_offset": 0, 00:08:36.492 "data_size": 0 00:08:36.492 } 00:08:36.492 ] 00:08:36.492 }' 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.492 12:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.061 [2024-11-19 12:00:40.174747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.061 [2024-11-19 12:00:40.174811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.061 [2024-11-19 
12:00:40.182765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.061 [2024-11-19 12:00:40.184593] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.061 [2024-11-19 12:00:40.184674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.061 [2024-11-19 12:00:40.184703] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.061 [2024-11-19 12:00:40.184725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.061 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.061 "name": "Existed_Raid", 00:08:37.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.061 "strip_size_kb": 64, 00:08:37.061 "state": "configuring", 00:08:37.061 "raid_level": "raid0", 00:08:37.061 "superblock": false, 00:08:37.061 "num_base_bdevs": 3, 00:08:37.061 "num_base_bdevs_discovered": 1, 00:08:37.061 "num_base_bdevs_operational": 3, 00:08:37.061 "base_bdevs_list": [ 00:08:37.061 { 00:08:37.061 "name": "BaseBdev1", 00:08:37.061 "uuid": "052bcecd-c868-46b1-8813-919c9697b5b8", 00:08:37.061 "is_configured": true, 00:08:37.061 "data_offset": 0, 00:08:37.061 "data_size": 65536 00:08:37.061 }, 00:08:37.061 { 00:08:37.061 "name": "BaseBdev2", 00:08:37.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.061 "is_configured": false, 00:08:37.061 "data_offset": 0, 00:08:37.061 "data_size": 0 00:08:37.061 }, 00:08:37.061 { 00:08:37.061 "name": "BaseBdev3", 00:08:37.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.061 "is_configured": false, 00:08:37.062 "data_offset": 0, 00:08:37.062 "data_size": 0 00:08:37.062 } 00:08:37.062 ] 00:08:37.062 }' 00:08:37.062 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:37.062 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.327 [2024-11-19 12:00:40.675849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.327 BaseBdev2 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:37.327 12:00:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.327 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.596 [ 00:08:37.596 { 00:08:37.596 "name": "BaseBdev2", 00:08:37.596 "aliases": [ 00:08:37.596 "16b5b1f0-5a6a-4940-a2b5-b371389a6e00" 00:08:37.596 ], 00:08:37.596 "product_name": "Malloc disk", 00:08:37.596 "block_size": 512, 00:08:37.596 "num_blocks": 65536, 00:08:37.596 "uuid": "16b5b1f0-5a6a-4940-a2b5-b371389a6e00", 00:08:37.596 "assigned_rate_limits": { 00:08:37.596 "rw_ios_per_sec": 0, 00:08:37.596 "rw_mbytes_per_sec": 0, 00:08:37.596 "r_mbytes_per_sec": 0, 00:08:37.596 "w_mbytes_per_sec": 0 00:08:37.596 }, 00:08:37.596 "claimed": true, 00:08:37.596 "claim_type": "exclusive_write", 00:08:37.596 "zoned": false, 00:08:37.596 "supported_io_types": { 00:08:37.596 "read": true, 00:08:37.596 "write": true, 00:08:37.596 "unmap": true, 00:08:37.596 "flush": true, 00:08:37.596 "reset": true, 00:08:37.596 "nvme_admin": false, 00:08:37.596 "nvme_io": false, 00:08:37.596 "nvme_io_md": false, 00:08:37.596 "write_zeroes": true, 00:08:37.596 "zcopy": true, 00:08:37.596 "get_zone_info": false, 00:08:37.596 "zone_management": false, 00:08:37.596 "zone_append": false, 00:08:37.596 "compare": false, 00:08:37.596 "compare_and_write": false, 00:08:37.596 "abort": true, 00:08:37.596 "seek_hole": false, 00:08:37.596 "seek_data": false, 00:08:37.596 "copy": true, 00:08:37.596 "nvme_iov_md": false 00:08:37.596 }, 00:08:37.596 "memory_domains": [ 00:08:37.596 { 00:08:37.597 "dma_device_id": "system", 00:08:37.597 "dma_device_type": 1 00:08:37.597 }, 00:08:37.597 { 00:08:37.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.597 "dma_device_type": 2 00:08:37.597 } 00:08:37.597 ], 00:08:37.597 "driver_specific": {} 00:08:37.597 } 00:08:37.597 ] 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.597 12:00:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.597 "name": "Existed_Raid", 00:08:37.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.597 "strip_size_kb": 64, 00:08:37.597 "state": "configuring", 00:08:37.597 "raid_level": "raid0", 00:08:37.597 "superblock": false, 00:08:37.597 "num_base_bdevs": 3, 00:08:37.597 "num_base_bdevs_discovered": 2, 00:08:37.597 "num_base_bdevs_operational": 3, 00:08:37.597 "base_bdevs_list": [ 00:08:37.597 { 00:08:37.597 "name": "BaseBdev1", 00:08:37.597 "uuid": "052bcecd-c868-46b1-8813-919c9697b5b8", 00:08:37.597 "is_configured": true, 00:08:37.597 "data_offset": 0, 00:08:37.597 "data_size": 65536 00:08:37.597 }, 00:08:37.597 { 00:08:37.597 "name": "BaseBdev2", 00:08:37.597 "uuid": "16b5b1f0-5a6a-4940-a2b5-b371389a6e00", 00:08:37.597 "is_configured": true, 00:08:37.597 "data_offset": 0, 00:08:37.597 "data_size": 65536 00:08:37.597 }, 00:08:37.597 { 00:08:37.597 "name": "BaseBdev3", 00:08:37.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.597 "is_configured": false, 00:08:37.597 "data_offset": 0, 00:08:37.597 "data_size": 0 00:08:37.597 } 00:08:37.597 ] 00:08:37.597 }' 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.597 12:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.856 [2024-11-19 12:00:41.163407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:37.856 [2024-11-19 12:00:41.163539] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:37.856 [2024-11-19 12:00:41.163572] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:37.856 [2024-11-19 12:00:41.163864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:37.856 [2024-11-19 12:00:41.164080] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:37.856 [2024-11-19 12:00:41.164127] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:37.856 [2024-11-19 12:00:41.164414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.856 BaseBdev3 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.856 
12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.856 [ 00:08:37.856 { 00:08:37.856 "name": "BaseBdev3", 00:08:37.856 "aliases": [ 00:08:37.856 "ee37f079-5038-416c-af9d-dbfa2a1411e8" 00:08:37.856 ], 00:08:37.856 "product_name": "Malloc disk", 00:08:37.856 "block_size": 512, 00:08:37.856 "num_blocks": 65536, 00:08:37.856 "uuid": "ee37f079-5038-416c-af9d-dbfa2a1411e8", 00:08:37.856 "assigned_rate_limits": { 00:08:37.856 "rw_ios_per_sec": 0, 00:08:37.856 "rw_mbytes_per_sec": 0, 00:08:37.856 "r_mbytes_per_sec": 0, 00:08:37.856 "w_mbytes_per_sec": 0 00:08:37.856 }, 00:08:37.856 "claimed": true, 00:08:37.856 "claim_type": "exclusive_write", 00:08:37.856 "zoned": false, 00:08:37.856 "supported_io_types": { 00:08:37.856 "read": true, 00:08:37.856 "write": true, 00:08:37.856 "unmap": true, 00:08:37.856 "flush": true, 00:08:37.856 "reset": true, 00:08:37.856 "nvme_admin": false, 00:08:37.856 "nvme_io": false, 00:08:37.856 "nvme_io_md": false, 00:08:37.856 "write_zeroes": true, 00:08:37.856 "zcopy": true, 00:08:37.856 "get_zone_info": false, 00:08:37.856 "zone_management": false, 00:08:37.856 "zone_append": false, 00:08:37.856 "compare": false, 00:08:37.856 "compare_and_write": false, 00:08:37.856 "abort": true, 00:08:37.856 "seek_hole": false, 00:08:37.856 "seek_data": false, 00:08:37.856 "copy": true, 00:08:37.856 "nvme_iov_md": false 00:08:37.856 }, 00:08:37.856 "memory_domains": [ 00:08:37.856 { 00:08:37.856 "dma_device_id": "system", 00:08:37.856 "dma_device_type": 1 00:08:37.856 }, 00:08:37.856 { 00:08:37.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.856 "dma_device_type": 2 00:08:37.856 } 00:08:37.856 ], 00:08:37.856 "driver_specific": {} 00:08:37.856 } 00:08:37.856 ] 
00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:37.856 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.857 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.857 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.857 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.857 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.857 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.857 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.857 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.857 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.857 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.857 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.857 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.857 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:37.857 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.115 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.115 "name": "Existed_Raid", 00:08:38.115 "uuid": "e927aab9-38ae-431a-be05-65453533a090", 00:08:38.115 "strip_size_kb": 64, 00:08:38.115 "state": "online", 00:08:38.115 "raid_level": "raid0", 00:08:38.115 "superblock": false, 00:08:38.115 "num_base_bdevs": 3, 00:08:38.115 "num_base_bdevs_discovered": 3, 00:08:38.115 "num_base_bdevs_operational": 3, 00:08:38.115 "base_bdevs_list": [ 00:08:38.115 { 00:08:38.115 "name": "BaseBdev1", 00:08:38.115 "uuid": "052bcecd-c868-46b1-8813-919c9697b5b8", 00:08:38.115 "is_configured": true, 00:08:38.115 "data_offset": 0, 00:08:38.115 "data_size": 65536 00:08:38.115 }, 00:08:38.115 { 00:08:38.115 "name": "BaseBdev2", 00:08:38.115 "uuid": "16b5b1f0-5a6a-4940-a2b5-b371389a6e00", 00:08:38.115 "is_configured": true, 00:08:38.116 "data_offset": 0, 00:08:38.116 "data_size": 65536 00:08:38.116 }, 00:08:38.116 { 00:08:38.116 "name": "BaseBdev3", 00:08:38.116 "uuid": "ee37f079-5038-416c-af9d-dbfa2a1411e8", 00:08:38.116 "is_configured": true, 00:08:38.116 "data_offset": 0, 00:08:38.116 "data_size": 65536 00:08:38.116 } 00:08:38.116 ] 00:08:38.116 }' 00:08:38.116 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.116 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.373 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:38.373 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:38.373 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:38.373 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:38.374 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:38.374 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:38.374 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:38.374 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:38.374 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.374 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.374 [2024-11-19 12:00:41.662960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.374 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.374 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:38.374 "name": "Existed_Raid", 00:08:38.374 "aliases": [ 00:08:38.374 "e927aab9-38ae-431a-be05-65453533a090" 00:08:38.374 ], 00:08:38.374 "product_name": "Raid Volume", 00:08:38.374 "block_size": 512, 00:08:38.374 "num_blocks": 196608, 00:08:38.374 "uuid": "e927aab9-38ae-431a-be05-65453533a090", 00:08:38.374 "assigned_rate_limits": { 00:08:38.374 "rw_ios_per_sec": 0, 00:08:38.374 "rw_mbytes_per_sec": 0, 00:08:38.374 "r_mbytes_per_sec": 0, 00:08:38.374 "w_mbytes_per_sec": 0 00:08:38.374 }, 00:08:38.374 "claimed": false, 00:08:38.374 "zoned": false, 00:08:38.374 "supported_io_types": { 00:08:38.374 "read": true, 00:08:38.374 "write": true, 00:08:38.374 "unmap": true, 00:08:38.374 "flush": true, 00:08:38.374 "reset": true, 00:08:38.374 "nvme_admin": false, 00:08:38.374 "nvme_io": false, 00:08:38.374 "nvme_io_md": false, 00:08:38.374 "write_zeroes": true, 00:08:38.374 "zcopy": false, 00:08:38.374 "get_zone_info": false, 00:08:38.374 "zone_management": false, 00:08:38.374 
"zone_append": false, 00:08:38.374 "compare": false, 00:08:38.374 "compare_and_write": false, 00:08:38.374 "abort": false, 00:08:38.374 "seek_hole": false, 00:08:38.374 "seek_data": false, 00:08:38.374 "copy": false, 00:08:38.374 "nvme_iov_md": false 00:08:38.374 }, 00:08:38.374 "memory_domains": [ 00:08:38.374 { 00:08:38.374 "dma_device_id": "system", 00:08:38.374 "dma_device_type": 1 00:08:38.374 }, 00:08:38.374 { 00:08:38.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.374 "dma_device_type": 2 00:08:38.374 }, 00:08:38.374 { 00:08:38.374 "dma_device_id": "system", 00:08:38.374 "dma_device_type": 1 00:08:38.374 }, 00:08:38.374 { 00:08:38.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.374 "dma_device_type": 2 00:08:38.374 }, 00:08:38.374 { 00:08:38.374 "dma_device_id": "system", 00:08:38.374 "dma_device_type": 1 00:08:38.374 }, 00:08:38.374 { 00:08:38.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.374 "dma_device_type": 2 00:08:38.374 } 00:08:38.374 ], 00:08:38.374 "driver_specific": { 00:08:38.374 "raid": { 00:08:38.374 "uuid": "e927aab9-38ae-431a-be05-65453533a090", 00:08:38.374 "strip_size_kb": 64, 00:08:38.374 "state": "online", 00:08:38.374 "raid_level": "raid0", 00:08:38.374 "superblock": false, 00:08:38.374 "num_base_bdevs": 3, 00:08:38.374 "num_base_bdevs_discovered": 3, 00:08:38.374 "num_base_bdevs_operational": 3, 00:08:38.374 "base_bdevs_list": [ 00:08:38.374 { 00:08:38.374 "name": "BaseBdev1", 00:08:38.374 "uuid": "052bcecd-c868-46b1-8813-919c9697b5b8", 00:08:38.374 "is_configured": true, 00:08:38.374 "data_offset": 0, 00:08:38.374 "data_size": 65536 00:08:38.374 }, 00:08:38.374 { 00:08:38.374 "name": "BaseBdev2", 00:08:38.374 "uuid": "16b5b1f0-5a6a-4940-a2b5-b371389a6e00", 00:08:38.374 "is_configured": true, 00:08:38.374 "data_offset": 0, 00:08:38.374 "data_size": 65536 00:08:38.374 }, 00:08:38.374 { 00:08:38.374 "name": "BaseBdev3", 00:08:38.374 "uuid": "ee37f079-5038-416c-af9d-dbfa2a1411e8", 00:08:38.374 "is_configured": true, 
00:08:38.374 "data_offset": 0, 00:08:38.374 "data_size": 65536 00:08:38.374 } 00:08:38.374 ] 00:08:38.374 } 00:08:38.374 } 00:08:38.374 }' 00:08:38.374 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:38.631 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:38.631 BaseBdev2 00:08:38.631 BaseBdev3' 00:08:38.631 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.631 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:38.631 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.631 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.632 12:00:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.632 12:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.632 [2024-11-19 12:00:41.930231] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:38.632 [2024-11-19 12:00:41.930316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.632 [2024-11-19 12:00:41.930379] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.891 "name": "Existed_Raid", 00:08:38.891 "uuid": "e927aab9-38ae-431a-be05-65453533a090", 00:08:38.891 "strip_size_kb": 64, 00:08:38.891 "state": "offline", 00:08:38.891 "raid_level": "raid0", 00:08:38.891 "superblock": false, 00:08:38.891 "num_base_bdevs": 3, 00:08:38.891 "num_base_bdevs_discovered": 2, 00:08:38.891 "num_base_bdevs_operational": 2, 00:08:38.891 "base_bdevs_list": [ 00:08:38.891 { 00:08:38.891 "name": null, 00:08:38.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.891 "is_configured": false, 00:08:38.891 "data_offset": 0, 00:08:38.891 "data_size": 65536 00:08:38.891 }, 00:08:38.891 { 00:08:38.891 "name": "BaseBdev2", 00:08:38.891 "uuid": "16b5b1f0-5a6a-4940-a2b5-b371389a6e00", 00:08:38.891 "is_configured": true, 00:08:38.891 "data_offset": 0, 00:08:38.891 "data_size": 65536 00:08:38.891 }, 00:08:38.891 { 00:08:38.891 "name": "BaseBdev3", 00:08:38.891 "uuid": "ee37f079-5038-416c-af9d-dbfa2a1411e8", 00:08:38.891 "is_configured": true, 00:08:38.891 "data_offset": 0, 00:08:38.891 "data_size": 65536 00:08:38.891 } 00:08:38.891 ] 00:08:38.891 }' 00:08:38.891 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.891 12:00:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.150 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:39.150 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:39.150 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.150 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:39.150 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.150 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.150 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.409 [2024-11-19 12:00:42.541851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.409 12:00:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.409 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.409 [2024-11-19 12:00:42.707036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:39.410 [2024-11-19 12:00:42.707115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.669 BaseBdev2 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.669 [ 00:08:39.669 { 00:08:39.669 "name": "BaseBdev2", 00:08:39.669 "aliases": [ 00:08:39.669 "bf6282b5-4264-4076-a7f3-a3b628f8a62a" 00:08:39.669 ], 00:08:39.669 "product_name": "Malloc disk", 00:08:39.669 "block_size": 512, 00:08:39.669 "num_blocks": 65536, 00:08:39.669 "uuid": "bf6282b5-4264-4076-a7f3-a3b628f8a62a", 00:08:39.669 "assigned_rate_limits": { 00:08:39.669 "rw_ios_per_sec": 0, 00:08:39.669 "rw_mbytes_per_sec": 0, 00:08:39.669 "r_mbytes_per_sec": 0, 00:08:39.669 "w_mbytes_per_sec": 0 00:08:39.669 }, 00:08:39.669 "claimed": false, 00:08:39.669 "zoned": false, 00:08:39.669 "supported_io_types": { 00:08:39.669 "read": true, 00:08:39.669 "write": true, 00:08:39.669 "unmap": true, 00:08:39.669 "flush": true, 00:08:39.669 "reset": true, 00:08:39.669 "nvme_admin": false, 00:08:39.669 "nvme_io": false, 00:08:39.669 "nvme_io_md": false, 00:08:39.669 "write_zeroes": true, 00:08:39.669 "zcopy": true, 00:08:39.669 "get_zone_info": false, 00:08:39.669 "zone_management": false, 00:08:39.669 "zone_append": false, 00:08:39.669 "compare": false, 00:08:39.669 "compare_and_write": false, 00:08:39.669 "abort": true, 00:08:39.669 "seek_hole": false, 00:08:39.669 "seek_data": false, 00:08:39.669 "copy": true, 00:08:39.669 "nvme_iov_md": false 00:08:39.669 }, 00:08:39.669 "memory_domains": [ 00:08:39.669 { 00:08:39.669 "dma_device_id": "system", 00:08:39.669 "dma_device_type": 1 00:08:39.669 }, 
00:08:39.669 { 00:08:39.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.669 "dma_device_type": 2 00:08:39.669 } 00:08:39.669 ], 00:08:39.669 "driver_specific": {} 00:08:39.669 } 00:08:39.669 ] 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.669 BaseBdev3 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:39.669 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.670 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.670 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:39.670 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.670 12:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.670 [ 00:08:39.670 { 00:08:39.670 "name": "BaseBdev3", 00:08:39.670 "aliases": [ 00:08:39.670 "5506952d-3a23-475e-91d4-2b968ba98ed0" 00:08:39.670 ], 00:08:39.670 "product_name": "Malloc disk", 00:08:39.670 "block_size": 512, 00:08:39.670 "num_blocks": 65536, 00:08:39.670 "uuid": "5506952d-3a23-475e-91d4-2b968ba98ed0", 00:08:39.670 "assigned_rate_limits": { 00:08:39.670 "rw_ios_per_sec": 0, 00:08:39.670 "rw_mbytes_per_sec": 0, 00:08:39.670 "r_mbytes_per_sec": 0, 00:08:39.670 "w_mbytes_per_sec": 0 00:08:39.670 }, 00:08:39.670 "claimed": false, 00:08:39.670 "zoned": false, 00:08:39.670 "supported_io_types": { 00:08:39.670 "read": true, 00:08:39.670 "write": true, 00:08:39.670 "unmap": true, 00:08:39.670 "flush": true, 00:08:39.670 "reset": true, 00:08:39.670 "nvme_admin": false, 00:08:39.670 "nvme_io": false, 00:08:39.670 "nvme_io_md": false, 00:08:39.670 "write_zeroes": true, 00:08:39.670 "zcopy": true, 00:08:39.670 "get_zone_info": false, 00:08:39.670 "zone_management": false, 00:08:39.670 "zone_append": false, 00:08:39.670 "compare": false, 00:08:39.670 "compare_and_write": false, 00:08:39.670 "abort": true, 00:08:39.670 "seek_hole": false, 00:08:39.670 "seek_data": false, 00:08:39.670 "copy": true, 00:08:39.670 "nvme_iov_md": false 00:08:39.670 }, 00:08:39.670 "memory_domains": [ 00:08:39.670 { 00:08:39.670 "dma_device_id": "system", 00:08:39.670 "dma_device_type": 1 00:08:39.670 }, 00:08:39.670 { 
00:08:39.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.670 "dma_device_type": 2 00:08:39.670 } 00:08:39.670 ], 00:08:39.670 "driver_specific": {} 00:08:39.670 } 00:08:39.670 ] 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.670 [2024-11-19 12:00:43.021025] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.670 [2024-11-19 12:00:43.021164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.670 [2024-11-19 12:00:43.021205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.670 [2024-11-19 12:00:43.022926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.670 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.929 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.929 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.929 "name": "Existed_Raid", 00:08:39.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.929 "strip_size_kb": 64, 00:08:39.929 "state": "configuring", 00:08:39.929 "raid_level": "raid0", 00:08:39.929 "superblock": false, 00:08:39.929 "num_base_bdevs": 3, 00:08:39.929 "num_base_bdevs_discovered": 2, 00:08:39.929 "num_base_bdevs_operational": 3, 00:08:39.929 "base_bdevs_list": [ 00:08:39.929 { 00:08:39.929 "name": "BaseBdev1", 00:08:39.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.929 
"is_configured": false, 00:08:39.929 "data_offset": 0, 00:08:39.929 "data_size": 0 00:08:39.929 }, 00:08:39.929 { 00:08:39.929 "name": "BaseBdev2", 00:08:39.929 "uuid": "bf6282b5-4264-4076-a7f3-a3b628f8a62a", 00:08:39.929 "is_configured": true, 00:08:39.929 "data_offset": 0, 00:08:39.930 "data_size": 65536 00:08:39.930 }, 00:08:39.930 { 00:08:39.930 "name": "BaseBdev3", 00:08:39.930 "uuid": "5506952d-3a23-475e-91d4-2b968ba98ed0", 00:08:39.930 "is_configured": true, 00:08:39.930 "data_offset": 0, 00:08:39.930 "data_size": 65536 00:08:39.930 } 00:08:39.930 ] 00:08:39.930 }' 00:08:39.930 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.930 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.189 [2024-11-19 12:00:43.456327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.189 12:00:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.189 "name": "Existed_Raid", 00:08:40.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.189 "strip_size_kb": 64, 00:08:40.189 "state": "configuring", 00:08:40.189 "raid_level": "raid0", 00:08:40.189 "superblock": false, 00:08:40.189 "num_base_bdevs": 3, 00:08:40.189 "num_base_bdevs_discovered": 1, 00:08:40.189 "num_base_bdevs_operational": 3, 00:08:40.189 "base_bdevs_list": [ 00:08:40.189 { 00:08:40.189 "name": "BaseBdev1", 00:08:40.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.189 "is_configured": false, 00:08:40.189 "data_offset": 0, 00:08:40.189 "data_size": 0 00:08:40.189 }, 00:08:40.189 { 00:08:40.189 "name": null, 00:08:40.189 "uuid": "bf6282b5-4264-4076-a7f3-a3b628f8a62a", 00:08:40.189 "is_configured": false, 00:08:40.189 "data_offset": 0, 
00:08:40.189 "data_size": 65536 00:08:40.189 }, 00:08:40.189 { 00:08:40.189 "name": "BaseBdev3", 00:08:40.189 "uuid": "5506952d-3a23-475e-91d4-2b968ba98ed0", 00:08:40.189 "is_configured": true, 00:08:40.189 "data_offset": 0, 00:08:40.189 "data_size": 65536 00:08:40.189 } 00:08:40.189 ] 00:08:40.189 }' 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.189 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.758 [2024-11-19 12:00:43.981496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.758 BaseBdev1 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.758 12:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.758 [ 00:08:40.758 { 00:08:40.758 "name": "BaseBdev1", 00:08:40.758 "aliases": [ 00:08:40.758 "722bd462-3d8e-464a-b405-011f36cfb2f3" 00:08:40.758 ], 00:08:40.758 "product_name": "Malloc disk", 00:08:40.758 "block_size": 512, 00:08:40.758 "num_blocks": 65536, 00:08:40.758 "uuid": "722bd462-3d8e-464a-b405-011f36cfb2f3", 00:08:40.758 "assigned_rate_limits": { 00:08:40.758 "rw_ios_per_sec": 0, 00:08:40.758 "rw_mbytes_per_sec": 0, 00:08:40.758 "r_mbytes_per_sec": 0, 00:08:40.758 "w_mbytes_per_sec": 0 00:08:40.758 }, 00:08:40.758 "claimed": true, 00:08:40.758 "claim_type": "exclusive_write", 00:08:40.758 "zoned": false, 00:08:40.758 "supported_io_types": { 00:08:40.758 "read": true, 00:08:40.758 "write": true, 00:08:40.758 "unmap": 
true, 00:08:40.758 "flush": true, 00:08:40.758 "reset": true, 00:08:40.758 "nvme_admin": false, 00:08:40.758 "nvme_io": false, 00:08:40.758 "nvme_io_md": false, 00:08:40.758 "write_zeroes": true, 00:08:40.758 "zcopy": true, 00:08:40.758 "get_zone_info": false, 00:08:40.758 "zone_management": false, 00:08:40.758 "zone_append": false, 00:08:40.758 "compare": false, 00:08:40.758 "compare_and_write": false, 00:08:40.758 "abort": true, 00:08:40.758 "seek_hole": false, 00:08:40.758 "seek_data": false, 00:08:40.758 "copy": true, 00:08:40.758 "nvme_iov_md": false 00:08:40.758 }, 00:08:40.758 "memory_domains": [ 00:08:40.758 { 00:08:40.758 "dma_device_id": "system", 00:08:40.758 "dma_device_type": 1 00:08:40.758 }, 00:08:40.758 { 00:08:40.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.758 "dma_device_type": 2 00:08:40.758 } 00:08:40.758 ], 00:08:40.758 "driver_specific": {} 00:08:40.758 } 00:08:40.758 ] 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.758 12:00:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.758 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.758 "name": "Existed_Raid", 00:08:40.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.758 "strip_size_kb": 64, 00:08:40.758 "state": "configuring", 00:08:40.758 "raid_level": "raid0", 00:08:40.758 "superblock": false, 00:08:40.758 "num_base_bdevs": 3, 00:08:40.758 "num_base_bdevs_discovered": 2, 00:08:40.758 "num_base_bdevs_operational": 3, 00:08:40.758 "base_bdevs_list": [ 00:08:40.758 { 00:08:40.758 "name": "BaseBdev1", 00:08:40.758 "uuid": "722bd462-3d8e-464a-b405-011f36cfb2f3", 00:08:40.758 "is_configured": true, 00:08:40.758 "data_offset": 0, 00:08:40.758 "data_size": 65536 00:08:40.758 }, 00:08:40.759 { 00:08:40.759 "name": null, 00:08:40.759 "uuid": "bf6282b5-4264-4076-a7f3-a3b628f8a62a", 00:08:40.759 "is_configured": false, 00:08:40.759 "data_offset": 0, 00:08:40.759 "data_size": 65536 00:08:40.759 }, 00:08:40.759 { 00:08:40.759 "name": "BaseBdev3", 00:08:40.759 "uuid": "5506952d-3a23-475e-91d4-2b968ba98ed0", 00:08:40.759 "is_configured": true, 00:08:40.759 "data_offset": 0, 
00:08:40.759 "data_size": 65536 00:08:40.759 } 00:08:40.759 ] 00:08:40.759 }' 00:08:40.759 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.759 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.326 [2024-11-19 12:00:44.500701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.326 "name": "Existed_Raid", 00:08:41.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.326 "strip_size_kb": 64, 00:08:41.326 "state": "configuring", 00:08:41.326 "raid_level": "raid0", 00:08:41.326 "superblock": false, 00:08:41.326 "num_base_bdevs": 3, 00:08:41.326 "num_base_bdevs_discovered": 1, 00:08:41.326 "num_base_bdevs_operational": 3, 00:08:41.326 "base_bdevs_list": [ 00:08:41.326 { 00:08:41.326 "name": "BaseBdev1", 00:08:41.326 "uuid": "722bd462-3d8e-464a-b405-011f36cfb2f3", 00:08:41.326 "is_configured": true, 00:08:41.326 "data_offset": 0, 00:08:41.326 "data_size": 65536 00:08:41.326 }, 00:08:41.326 { 
00:08:41.326 "name": null, 00:08:41.326 "uuid": "bf6282b5-4264-4076-a7f3-a3b628f8a62a", 00:08:41.326 "is_configured": false, 00:08:41.326 "data_offset": 0, 00:08:41.326 "data_size": 65536 00:08:41.326 }, 00:08:41.326 { 00:08:41.326 "name": null, 00:08:41.326 "uuid": "5506952d-3a23-475e-91d4-2b968ba98ed0", 00:08:41.326 "is_configured": false, 00:08:41.326 "data_offset": 0, 00:08:41.326 "data_size": 65536 00:08:41.326 } 00:08:41.326 ] 00:08:41.326 }' 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.326 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.585 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:41.585 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.585 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.585 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.585 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.844 [2024-11-19 12:00:44.975938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.844 12:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.844 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.844 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.844 "name": "Existed_Raid", 00:08:41.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.844 "strip_size_kb": 64, 00:08:41.844 "state": "configuring", 00:08:41.844 "raid_level": "raid0", 00:08:41.844 
"superblock": false, 00:08:41.844 "num_base_bdevs": 3, 00:08:41.844 "num_base_bdevs_discovered": 2, 00:08:41.844 "num_base_bdevs_operational": 3, 00:08:41.844 "base_bdevs_list": [ 00:08:41.844 { 00:08:41.844 "name": "BaseBdev1", 00:08:41.844 "uuid": "722bd462-3d8e-464a-b405-011f36cfb2f3", 00:08:41.844 "is_configured": true, 00:08:41.844 "data_offset": 0, 00:08:41.844 "data_size": 65536 00:08:41.844 }, 00:08:41.844 { 00:08:41.844 "name": null, 00:08:41.844 "uuid": "bf6282b5-4264-4076-a7f3-a3b628f8a62a", 00:08:41.844 "is_configured": false, 00:08:41.845 "data_offset": 0, 00:08:41.845 "data_size": 65536 00:08:41.845 }, 00:08:41.845 { 00:08:41.845 "name": "BaseBdev3", 00:08:41.845 "uuid": "5506952d-3a23-475e-91d4-2b968ba98ed0", 00:08:41.845 "is_configured": true, 00:08:41.845 "data_offset": 0, 00:08:41.845 "data_size": 65536 00:08:41.845 } 00:08:41.845 ] 00:08:41.845 }' 00:08:41.845 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.845 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.104 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.104 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:42.104 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.104 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.104 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.104 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:42.104 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:42.105 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:42.105 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.105 [2024-11-19 12:00:45.439198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.364 "name": "Existed_Raid", 00:08:42.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.364 "strip_size_kb": 64, 00:08:42.364 "state": "configuring", 00:08:42.364 "raid_level": "raid0", 00:08:42.364 "superblock": false, 00:08:42.364 "num_base_bdevs": 3, 00:08:42.364 "num_base_bdevs_discovered": 1, 00:08:42.364 "num_base_bdevs_operational": 3, 00:08:42.364 "base_bdevs_list": [ 00:08:42.364 { 00:08:42.364 "name": null, 00:08:42.364 "uuid": "722bd462-3d8e-464a-b405-011f36cfb2f3", 00:08:42.364 "is_configured": false, 00:08:42.364 "data_offset": 0, 00:08:42.364 "data_size": 65536 00:08:42.364 }, 00:08:42.364 { 00:08:42.364 "name": null, 00:08:42.364 "uuid": "bf6282b5-4264-4076-a7f3-a3b628f8a62a", 00:08:42.364 "is_configured": false, 00:08:42.364 "data_offset": 0, 00:08:42.364 "data_size": 65536 00:08:42.364 }, 00:08:42.364 { 00:08:42.364 "name": "BaseBdev3", 00:08:42.364 "uuid": "5506952d-3a23-475e-91d4-2b968ba98ed0", 00:08:42.364 "is_configured": true, 00:08:42.364 "data_offset": 0, 00:08:42.364 "data_size": 65536 00:08:42.364 } 00:08:42.364 ] 00:08:42.364 }' 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.364 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.623 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.623 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.623 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.623 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:42.623 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:42.624 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:42.624 12:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:42.624 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.624 12:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.624 [2024-11-19 12:00:45.998553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.883 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.883 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.883 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.883 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.883 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.883 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.883 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.883 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.883 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.883 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.883 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.883 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:42.883 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.883 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.884 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.884 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.884 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.884 "name": "Existed_Raid", 00:08:42.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.884 "strip_size_kb": 64, 00:08:42.884 "state": "configuring", 00:08:42.884 "raid_level": "raid0", 00:08:42.884 "superblock": false, 00:08:42.884 "num_base_bdevs": 3, 00:08:42.884 "num_base_bdevs_discovered": 2, 00:08:42.884 "num_base_bdevs_operational": 3, 00:08:42.884 "base_bdevs_list": [ 00:08:42.884 { 00:08:42.884 "name": null, 00:08:42.884 "uuid": "722bd462-3d8e-464a-b405-011f36cfb2f3", 00:08:42.884 "is_configured": false, 00:08:42.884 "data_offset": 0, 00:08:42.884 "data_size": 65536 00:08:42.884 }, 00:08:42.884 { 00:08:42.884 "name": "BaseBdev2", 00:08:42.884 "uuid": "bf6282b5-4264-4076-a7f3-a3b628f8a62a", 00:08:42.884 "is_configured": true, 00:08:42.884 "data_offset": 0, 00:08:42.884 "data_size": 65536 00:08:42.884 }, 00:08:42.884 { 00:08:42.884 "name": "BaseBdev3", 00:08:42.884 "uuid": "5506952d-3a23-475e-91d4-2b968ba98ed0", 00:08:42.884 "is_configured": true, 00:08:42.884 "data_offset": 0, 00:08:42.884 "data_size": 65536 00:08:42.884 } 00:08:42.884 ] 00:08:42.884 }' 00:08:42.884 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.884 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.143 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.143 12:00:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.143 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.143 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:43.143 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.143 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:43.143 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.143 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:43.143 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.143 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 722bd462-3d8e-464a-b405-011f36cfb2f3 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.403 [2024-11-19 12:00:46.600986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:43.403 [2024-11-19 12:00:46.601092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:43.403 [2024-11-19 12:00:46.601119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:43.403 [2024-11-19 12:00:46.601374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:43.403 [2024-11-19 12:00:46.601550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:43.403 [2024-11-19 12:00:46.601589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:43.403 [2024-11-19 12:00:46.601861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.403 NewBaseBdev 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:43.403 [ 00:08:43.403 { 00:08:43.403 "name": "NewBaseBdev", 00:08:43.403 "aliases": [ 00:08:43.403 "722bd462-3d8e-464a-b405-011f36cfb2f3" 00:08:43.403 ], 00:08:43.403 "product_name": "Malloc disk", 00:08:43.403 "block_size": 512, 00:08:43.403 "num_blocks": 65536, 00:08:43.403 "uuid": "722bd462-3d8e-464a-b405-011f36cfb2f3", 00:08:43.403 "assigned_rate_limits": { 00:08:43.403 "rw_ios_per_sec": 0, 00:08:43.403 "rw_mbytes_per_sec": 0, 00:08:43.403 "r_mbytes_per_sec": 0, 00:08:43.403 "w_mbytes_per_sec": 0 00:08:43.403 }, 00:08:43.403 "claimed": true, 00:08:43.403 "claim_type": "exclusive_write", 00:08:43.403 "zoned": false, 00:08:43.403 "supported_io_types": { 00:08:43.403 "read": true, 00:08:43.403 "write": true, 00:08:43.403 "unmap": true, 00:08:43.403 "flush": true, 00:08:43.403 "reset": true, 00:08:43.403 "nvme_admin": false, 00:08:43.403 "nvme_io": false, 00:08:43.403 "nvme_io_md": false, 00:08:43.403 "write_zeroes": true, 00:08:43.403 "zcopy": true, 00:08:43.403 "get_zone_info": false, 00:08:43.403 "zone_management": false, 00:08:43.403 "zone_append": false, 00:08:43.403 "compare": false, 00:08:43.403 "compare_and_write": false, 00:08:43.403 "abort": true, 00:08:43.403 "seek_hole": false, 00:08:43.403 "seek_data": false, 00:08:43.403 "copy": true, 00:08:43.403 "nvme_iov_md": false 00:08:43.403 }, 00:08:43.403 "memory_domains": [ 00:08:43.403 { 00:08:43.403 "dma_device_id": "system", 00:08:43.403 "dma_device_type": 1 00:08:43.403 }, 00:08:43.403 { 00:08:43.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.403 "dma_device_type": 2 00:08:43.403 } 00:08:43.403 ], 00:08:43.403 "driver_specific": {} 00:08:43.403 } 00:08:43.403 ] 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.403 "name": "Existed_Raid", 00:08:43.403 "uuid": "a686330b-79df-43dd-8640-dd20850974aa", 00:08:43.403 "strip_size_kb": 64, 00:08:43.403 "state": "online", 00:08:43.403 "raid_level": "raid0", 00:08:43.403 "superblock": false, 00:08:43.403 "num_base_bdevs": 3, 00:08:43.403 
"num_base_bdevs_discovered": 3, 00:08:43.403 "num_base_bdevs_operational": 3, 00:08:43.403 "base_bdevs_list": [ 00:08:43.403 { 00:08:43.403 "name": "NewBaseBdev", 00:08:43.403 "uuid": "722bd462-3d8e-464a-b405-011f36cfb2f3", 00:08:43.403 "is_configured": true, 00:08:43.403 "data_offset": 0, 00:08:43.403 "data_size": 65536 00:08:43.403 }, 00:08:43.403 { 00:08:43.403 "name": "BaseBdev2", 00:08:43.403 "uuid": "bf6282b5-4264-4076-a7f3-a3b628f8a62a", 00:08:43.403 "is_configured": true, 00:08:43.403 "data_offset": 0, 00:08:43.403 "data_size": 65536 00:08:43.403 }, 00:08:43.403 { 00:08:43.403 "name": "BaseBdev3", 00:08:43.403 "uuid": "5506952d-3a23-475e-91d4-2b968ba98ed0", 00:08:43.403 "is_configured": true, 00:08:43.403 "data_offset": 0, 00:08:43.403 "data_size": 65536 00:08:43.403 } 00:08:43.403 ] 00:08:43.403 }' 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.403 12:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.663 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:43.663 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:43.663 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:43.663 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:43.663 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:43.663 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:43.663 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:43.663 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:43.663 12:00:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.663 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.663 [2024-11-19 12:00:47.036599] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.923 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.923 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.923 "name": "Existed_Raid", 00:08:43.923 "aliases": [ 00:08:43.923 "a686330b-79df-43dd-8640-dd20850974aa" 00:08:43.923 ], 00:08:43.923 "product_name": "Raid Volume", 00:08:43.923 "block_size": 512, 00:08:43.923 "num_blocks": 196608, 00:08:43.923 "uuid": "a686330b-79df-43dd-8640-dd20850974aa", 00:08:43.923 "assigned_rate_limits": { 00:08:43.923 "rw_ios_per_sec": 0, 00:08:43.923 "rw_mbytes_per_sec": 0, 00:08:43.923 "r_mbytes_per_sec": 0, 00:08:43.923 "w_mbytes_per_sec": 0 00:08:43.923 }, 00:08:43.923 "claimed": false, 00:08:43.923 "zoned": false, 00:08:43.923 "supported_io_types": { 00:08:43.923 "read": true, 00:08:43.923 "write": true, 00:08:43.923 "unmap": true, 00:08:43.923 "flush": true, 00:08:43.923 "reset": true, 00:08:43.923 "nvme_admin": false, 00:08:43.923 "nvme_io": false, 00:08:43.923 "nvme_io_md": false, 00:08:43.923 "write_zeroes": true, 00:08:43.923 "zcopy": false, 00:08:43.923 "get_zone_info": false, 00:08:43.923 "zone_management": false, 00:08:43.923 "zone_append": false, 00:08:43.923 "compare": false, 00:08:43.923 "compare_and_write": false, 00:08:43.923 "abort": false, 00:08:43.923 "seek_hole": false, 00:08:43.923 "seek_data": false, 00:08:43.923 "copy": false, 00:08:43.923 "nvme_iov_md": false 00:08:43.923 }, 00:08:43.923 "memory_domains": [ 00:08:43.923 { 00:08:43.923 "dma_device_id": "system", 00:08:43.923 "dma_device_type": 1 00:08:43.923 }, 00:08:43.923 { 00:08:43.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.923 "dma_device_type": 2 00:08:43.923 }, 
00:08:43.923 { 00:08:43.923 "dma_device_id": "system", 00:08:43.923 "dma_device_type": 1 00:08:43.923 }, 00:08:43.923 { 00:08:43.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.923 "dma_device_type": 2 00:08:43.923 }, 00:08:43.923 { 00:08:43.923 "dma_device_id": "system", 00:08:43.923 "dma_device_type": 1 00:08:43.923 }, 00:08:43.923 { 00:08:43.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.923 "dma_device_type": 2 00:08:43.923 } 00:08:43.923 ], 00:08:43.923 "driver_specific": { 00:08:43.923 "raid": { 00:08:43.923 "uuid": "a686330b-79df-43dd-8640-dd20850974aa", 00:08:43.923 "strip_size_kb": 64, 00:08:43.923 "state": "online", 00:08:43.923 "raid_level": "raid0", 00:08:43.923 "superblock": false, 00:08:43.923 "num_base_bdevs": 3, 00:08:43.923 "num_base_bdevs_discovered": 3, 00:08:43.923 "num_base_bdevs_operational": 3, 00:08:43.923 "base_bdevs_list": [ 00:08:43.923 { 00:08:43.923 "name": "NewBaseBdev", 00:08:43.923 "uuid": "722bd462-3d8e-464a-b405-011f36cfb2f3", 00:08:43.923 "is_configured": true, 00:08:43.923 "data_offset": 0, 00:08:43.923 "data_size": 65536 00:08:43.923 }, 00:08:43.923 { 00:08:43.923 "name": "BaseBdev2", 00:08:43.923 "uuid": "bf6282b5-4264-4076-a7f3-a3b628f8a62a", 00:08:43.923 "is_configured": true, 00:08:43.923 "data_offset": 0, 00:08:43.923 "data_size": 65536 00:08:43.923 }, 00:08:43.923 { 00:08:43.923 "name": "BaseBdev3", 00:08:43.923 "uuid": "5506952d-3a23-475e-91d4-2b968ba98ed0", 00:08:43.923 "is_configured": true, 00:08:43.923 "data_offset": 0, 00:08:43.923 "data_size": 65536 00:08:43.923 } 00:08:43.923 ] 00:08:43.923 } 00:08:43.923 } 00:08:43.923 }' 00:08:43.923 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.923 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:43.923 BaseBdev2 00:08:43.923 BaseBdev3' 00:08:43.923 12:00:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.924 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.924 [2024-11-19 12:00:47.295831] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:43.924 [2024-11-19 12:00:47.295861] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.924 [2024-11-19 12:00:47.295945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.924 [2024-11-19 12:00:47.296000] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.924 [2024-11-19 12:00:47.296023] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:44.183 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.184 12:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63872 00:08:44.184 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63872 ']' 00:08:44.184 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63872 00:08:44.184 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:44.184 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.184 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63872 00:08:44.184 killing process with pid 63872 00:08:44.184 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.184 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.184 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63872' 00:08:44.184 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63872 00:08:44.184 [2024-11-19 12:00:47.346500] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.184 12:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63872 00:08:44.443 [2024-11-19 12:00:47.645703] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:45.385 12:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:45.385 00:08:45.385 real 0m10.469s 00:08:45.385 user 0m16.554s 00:08:45.385 sys 0m1.874s 00:08:45.385 12:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:08:45.385 ************************************ 00:08:45.385 END TEST raid_state_function_test 00:08:45.385 ************************************ 00:08:45.385 12:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.645 12:00:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:45.645 12:00:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:45.645 12:00:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.645 12:00:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.645 ************************************ 00:08:45.645 START TEST raid_state_function_test_sb 00:08:45.645 ************************************ 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64494 00:08:45.645 12:00:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64494' 00:08:45.645 Process raid pid: 64494 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64494 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64494 ']' 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.645 12:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.645 [2024-11-19 12:00:48.923868] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:08:45.645 [2024-11-19 12:00:48.924097] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.905 [2024-11-19 12:00:49.081333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.905 [2024-11-19 12:00:49.199458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.164 [2024-11-19 12:00:49.400240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.164 [2024-11-19 12:00:49.400337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.423 [2024-11-19 12:00:49.764586] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.423 [2024-11-19 12:00:49.764639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.423 [2024-11-19 12:00:49.764650] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.423 [2024-11-19 12:00:49.764659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.423 [2024-11-19 12:00:49.764666] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:46.423 [2024-11-19 12:00:49.764674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.423 12:00:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.683 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.683 "name": "Existed_Raid", 00:08:46.683 "uuid": "ec9570f3-8fc4-4e25-b43f-025407ab1b2b", 00:08:46.683 "strip_size_kb": 64, 00:08:46.683 "state": "configuring", 00:08:46.683 "raid_level": "raid0", 00:08:46.683 "superblock": true, 00:08:46.683 "num_base_bdevs": 3, 00:08:46.683 "num_base_bdevs_discovered": 0, 00:08:46.684 "num_base_bdevs_operational": 3, 00:08:46.684 "base_bdevs_list": [ 00:08:46.684 { 00:08:46.684 "name": "BaseBdev1", 00:08:46.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.684 "is_configured": false, 00:08:46.684 "data_offset": 0, 00:08:46.684 "data_size": 0 00:08:46.684 }, 00:08:46.684 { 00:08:46.684 "name": "BaseBdev2", 00:08:46.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.684 "is_configured": false, 00:08:46.684 "data_offset": 0, 00:08:46.684 "data_size": 0 00:08:46.684 }, 00:08:46.684 { 00:08:46.684 "name": "BaseBdev3", 00:08:46.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.684 "is_configured": false, 00:08:46.684 "data_offset": 0, 00:08:46.684 "data_size": 0 00:08:46.684 } 00:08:46.684 ] 00:08:46.684 }' 00:08:46.684 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.684 12:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.943 [2024-11-19 12:00:50.243712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.943 [2024-11-19 12:00:50.243799] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.943 [2024-11-19 12:00:50.255674] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.943 [2024-11-19 12:00:50.255755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.943 [2024-11-19 12:00:50.255782] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.943 [2024-11-19 12:00:50.255805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.943 [2024-11-19 12:00:50.255822] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:46.943 [2024-11-19 12:00:50.255843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.943 [2024-11-19 12:00:50.301446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.943 BaseBdev1 
00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.943 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.202 [ 00:08:47.202 { 00:08:47.202 "name": "BaseBdev1", 00:08:47.202 "aliases": [ 00:08:47.202 "6ea12f9b-fc12-4e98-8386-ee88aed53cfe" 00:08:47.202 ], 00:08:47.202 "product_name": "Malloc disk", 00:08:47.202 "block_size": 512, 00:08:47.202 "num_blocks": 65536, 00:08:47.202 "uuid": "6ea12f9b-fc12-4e98-8386-ee88aed53cfe", 00:08:47.202 "assigned_rate_limits": { 00:08:47.202 
"rw_ios_per_sec": 0, 00:08:47.202 "rw_mbytes_per_sec": 0, 00:08:47.202 "r_mbytes_per_sec": 0, 00:08:47.202 "w_mbytes_per_sec": 0 00:08:47.202 }, 00:08:47.202 "claimed": true, 00:08:47.202 "claim_type": "exclusive_write", 00:08:47.202 "zoned": false, 00:08:47.202 "supported_io_types": { 00:08:47.202 "read": true, 00:08:47.202 "write": true, 00:08:47.202 "unmap": true, 00:08:47.202 "flush": true, 00:08:47.202 "reset": true, 00:08:47.202 "nvme_admin": false, 00:08:47.202 "nvme_io": false, 00:08:47.202 "nvme_io_md": false, 00:08:47.202 "write_zeroes": true, 00:08:47.202 "zcopy": true, 00:08:47.202 "get_zone_info": false, 00:08:47.202 "zone_management": false, 00:08:47.202 "zone_append": false, 00:08:47.202 "compare": false, 00:08:47.202 "compare_and_write": false, 00:08:47.202 "abort": true, 00:08:47.202 "seek_hole": false, 00:08:47.202 "seek_data": false, 00:08:47.202 "copy": true, 00:08:47.202 "nvme_iov_md": false 00:08:47.202 }, 00:08:47.202 "memory_domains": [ 00:08:47.202 { 00:08:47.202 "dma_device_id": "system", 00:08:47.202 "dma_device_type": 1 00:08:47.202 }, 00:08:47.202 { 00:08:47.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.202 "dma_device_type": 2 00:08:47.202 } 00:08:47.202 ], 00:08:47.202 "driver_specific": {} 00:08:47.202 } 00:08:47.202 ] 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.202 "name": "Existed_Raid", 00:08:47.202 "uuid": "d35cb5b5-fc48-4a1a-84cc-7a9eeddce2c1", 00:08:47.202 "strip_size_kb": 64, 00:08:47.202 "state": "configuring", 00:08:47.202 "raid_level": "raid0", 00:08:47.202 "superblock": true, 00:08:47.202 "num_base_bdevs": 3, 00:08:47.202 "num_base_bdevs_discovered": 1, 00:08:47.202 "num_base_bdevs_operational": 3, 00:08:47.202 "base_bdevs_list": [ 00:08:47.202 { 00:08:47.202 "name": "BaseBdev1", 00:08:47.202 "uuid": "6ea12f9b-fc12-4e98-8386-ee88aed53cfe", 00:08:47.202 "is_configured": true, 00:08:47.202 "data_offset": 2048, 00:08:47.202 "data_size": 63488 
00:08:47.202 }, 00:08:47.202 { 00:08:47.202 "name": "BaseBdev2", 00:08:47.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.202 "is_configured": false, 00:08:47.202 "data_offset": 0, 00:08:47.202 "data_size": 0 00:08:47.202 }, 00:08:47.202 { 00:08:47.202 "name": "BaseBdev3", 00:08:47.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.202 "is_configured": false, 00:08:47.202 "data_offset": 0, 00:08:47.202 "data_size": 0 00:08:47.202 } 00:08:47.202 ] 00:08:47.202 }' 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.202 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.462 [2024-11-19 12:00:50.740783] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.462 [2024-11-19 12:00:50.740848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.462 [2024-11-19 12:00:50.752797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.462 [2024-11-19 
12:00:50.754712] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.462 [2024-11-19 12:00:50.754789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.462 [2024-11-19 12:00:50.754819] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:47.462 [2024-11-19 12:00:50.754844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.462 "name": "Existed_Raid", 00:08:47.462 "uuid": "b31fd062-e1e4-4a7a-b051-71ecfe2d9ada", 00:08:47.462 "strip_size_kb": 64, 00:08:47.462 "state": "configuring", 00:08:47.462 "raid_level": "raid0", 00:08:47.462 "superblock": true, 00:08:47.462 "num_base_bdevs": 3, 00:08:47.462 "num_base_bdevs_discovered": 1, 00:08:47.462 "num_base_bdevs_operational": 3, 00:08:47.462 "base_bdevs_list": [ 00:08:47.462 { 00:08:47.462 "name": "BaseBdev1", 00:08:47.462 "uuid": "6ea12f9b-fc12-4e98-8386-ee88aed53cfe", 00:08:47.462 "is_configured": true, 00:08:47.462 "data_offset": 2048, 00:08:47.462 "data_size": 63488 00:08:47.462 }, 00:08:47.462 { 00:08:47.462 "name": "BaseBdev2", 00:08:47.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.462 "is_configured": false, 00:08:47.462 "data_offset": 0, 00:08:47.462 "data_size": 0 00:08:47.462 }, 00:08:47.462 { 00:08:47.462 "name": "BaseBdev3", 00:08:47.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.462 "is_configured": false, 00:08:47.462 "data_offset": 0, 00:08:47.462 "data_size": 0 00:08:47.462 } 00:08:47.462 ] 00:08:47.462 }' 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.462 12:00:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 [2024-11-19 12:00:51.261370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.033 BaseBdev2 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 [ 00:08:48.033 { 00:08:48.033 "name": "BaseBdev2", 00:08:48.033 "aliases": [ 00:08:48.033 "2ba952b1-baea-4316-b107-2378d5963645" 00:08:48.033 ], 00:08:48.033 "product_name": "Malloc disk", 00:08:48.033 "block_size": 512, 00:08:48.033 "num_blocks": 65536, 00:08:48.033 "uuid": "2ba952b1-baea-4316-b107-2378d5963645", 00:08:48.033 "assigned_rate_limits": { 00:08:48.033 "rw_ios_per_sec": 0, 00:08:48.033 "rw_mbytes_per_sec": 0, 00:08:48.033 "r_mbytes_per_sec": 0, 00:08:48.033 "w_mbytes_per_sec": 0 00:08:48.033 }, 00:08:48.033 "claimed": true, 00:08:48.033 "claim_type": "exclusive_write", 00:08:48.033 "zoned": false, 00:08:48.033 "supported_io_types": { 00:08:48.033 "read": true, 00:08:48.033 "write": true, 00:08:48.033 "unmap": true, 00:08:48.033 "flush": true, 00:08:48.033 "reset": true, 00:08:48.033 "nvme_admin": false, 00:08:48.033 "nvme_io": false, 00:08:48.033 "nvme_io_md": false, 00:08:48.033 "write_zeroes": true, 00:08:48.033 "zcopy": true, 00:08:48.033 "get_zone_info": false, 00:08:48.033 "zone_management": false, 00:08:48.033 "zone_append": false, 00:08:48.033 "compare": false, 00:08:48.033 "compare_and_write": false, 00:08:48.033 "abort": true, 00:08:48.033 "seek_hole": false, 00:08:48.033 "seek_data": false, 00:08:48.033 "copy": true, 00:08:48.033 "nvme_iov_md": false 00:08:48.033 }, 00:08:48.033 "memory_domains": [ 00:08:48.033 { 00:08:48.033 "dma_device_id": "system", 00:08:48.033 "dma_device_type": 1 00:08:48.033 }, 00:08:48.033 { 00:08:48.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.033 "dma_device_type": 2 00:08:48.033 } 00:08:48.033 ], 00:08:48.033 "driver_specific": {} 00:08:48.033 } 00:08:48.033 ] 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.033 "name": "Existed_Raid", 00:08:48.033 "uuid": "b31fd062-e1e4-4a7a-b051-71ecfe2d9ada", 00:08:48.033 "strip_size_kb": 64, 00:08:48.033 "state": "configuring", 00:08:48.033 "raid_level": "raid0", 00:08:48.033 "superblock": true, 00:08:48.033 "num_base_bdevs": 3, 00:08:48.033 "num_base_bdevs_discovered": 2, 00:08:48.033 "num_base_bdevs_operational": 3, 00:08:48.033 "base_bdevs_list": [ 00:08:48.033 { 00:08:48.033 "name": "BaseBdev1", 00:08:48.033 "uuid": "6ea12f9b-fc12-4e98-8386-ee88aed53cfe", 00:08:48.033 "is_configured": true, 00:08:48.033 "data_offset": 2048, 00:08:48.033 "data_size": 63488 00:08:48.033 }, 00:08:48.033 { 00:08:48.033 "name": "BaseBdev2", 00:08:48.033 "uuid": "2ba952b1-baea-4316-b107-2378d5963645", 00:08:48.033 "is_configured": true, 00:08:48.033 "data_offset": 2048, 00:08:48.033 "data_size": 63488 00:08:48.033 }, 00:08:48.033 { 00:08:48.033 "name": "BaseBdev3", 00:08:48.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.033 "is_configured": false, 00:08:48.033 "data_offset": 0, 00:08:48.033 "data_size": 0 00:08:48.033 } 00:08:48.033 ] 00:08:48.033 }' 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.033 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.603 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:48.603 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.603 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.603 [2024-11-19 12:00:51.835717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.603 [2024-11-19 12:00:51.836016] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:48.603 [2024-11-19 12:00:51.836041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.603 [2024-11-19 12:00:51.836328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:48.603 BaseBdev3 00:08:48.603 [2024-11-19 12:00:51.836478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:48.603 [2024-11-19 12:00:51.836497] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:48.603 [2024-11-19 12:00:51.836656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.603 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.603 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:48.603 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.604 [ 00:08:48.604 { 00:08:48.604 "name": "BaseBdev3", 00:08:48.604 "aliases": [ 00:08:48.604 "a62439c8-c6e4-4dd0-8053-4c6a959a5f1d" 00:08:48.604 ], 00:08:48.604 "product_name": "Malloc disk", 00:08:48.604 "block_size": 512, 00:08:48.604 "num_blocks": 65536, 00:08:48.604 "uuid": "a62439c8-c6e4-4dd0-8053-4c6a959a5f1d", 00:08:48.604 "assigned_rate_limits": { 00:08:48.604 "rw_ios_per_sec": 0, 00:08:48.604 "rw_mbytes_per_sec": 0, 00:08:48.604 "r_mbytes_per_sec": 0, 00:08:48.604 "w_mbytes_per_sec": 0 00:08:48.604 }, 00:08:48.604 "claimed": true, 00:08:48.604 "claim_type": "exclusive_write", 00:08:48.604 "zoned": false, 00:08:48.604 "supported_io_types": { 00:08:48.604 "read": true, 00:08:48.604 "write": true, 00:08:48.604 "unmap": true, 00:08:48.604 "flush": true, 00:08:48.604 "reset": true, 00:08:48.604 "nvme_admin": false, 00:08:48.604 "nvme_io": false, 00:08:48.604 "nvme_io_md": false, 00:08:48.604 "write_zeroes": true, 00:08:48.604 "zcopy": true, 00:08:48.604 "get_zone_info": false, 00:08:48.604 "zone_management": false, 00:08:48.604 "zone_append": false, 00:08:48.604 "compare": false, 00:08:48.604 "compare_and_write": false, 00:08:48.604 "abort": true, 00:08:48.604 "seek_hole": false, 00:08:48.604 "seek_data": false, 00:08:48.604 "copy": true, 00:08:48.604 "nvme_iov_md": false 00:08:48.604 }, 00:08:48.604 "memory_domains": [ 00:08:48.604 { 00:08:48.604 "dma_device_id": "system", 00:08:48.604 "dma_device_type": 1 00:08:48.604 }, 00:08:48.604 { 00:08:48.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.604 "dma_device_type": 2 00:08:48.604 } 00:08:48.604 ], 00:08:48.604 "driver_specific": 
{} 00:08:48.604 } 00:08:48.604 ] 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.604 "name": "Existed_Raid", 00:08:48.604 "uuid": "b31fd062-e1e4-4a7a-b051-71ecfe2d9ada", 00:08:48.604 "strip_size_kb": 64, 00:08:48.604 "state": "online", 00:08:48.604 "raid_level": "raid0", 00:08:48.604 "superblock": true, 00:08:48.604 "num_base_bdevs": 3, 00:08:48.604 "num_base_bdevs_discovered": 3, 00:08:48.604 "num_base_bdevs_operational": 3, 00:08:48.604 "base_bdevs_list": [ 00:08:48.604 { 00:08:48.604 "name": "BaseBdev1", 00:08:48.604 "uuid": "6ea12f9b-fc12-4e98-8386-ee88aed53cfe", 00:08:48.604 "is_configured": true, 00:08:48.604 "data_offset": 2048, 00:08:48.604 "data_size": 63488 00:08:48.604 }, 00:08:48.604 { 00:08:48.604 "name": "BaseBdev2", 00:08:48.604 "uuid": "2ba952b1-baea-4316-b107-2378d5963645", 00:08:48.604 "is_configured": true, 00:08:48.604 "data_offset": 2048, 00:08:48.604 "data_size": 63488 00:08:48.604 }, 00:08:48.604 { 00:08:48.604 "name": "BaseBdev3", 00:08:48.604 "uuid": "a62439c8-c6e4-4dd0-8053-4c6a959a5f1d", 00:08:48.604 "is_configured": true, 00:08:48.604 "data_offset": 2048, 00:08:48.604 "data_size": 63488 00:08:48.604 } 00:08:48.604 ] 00:08:48.604 }' 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.604 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.174 [2024-11-19 12:00:52.287379] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.174 "name": "Existed_Raid", 00:08:49.174 "aliases": [ 00:08:49.174 "b31fd062-e1e4-4a7a-b051-71ecfe2d9ada" 00:08:49.174 ], 00:08:49.174 "product_name": "Raid Volume", 00:08:49.174 "block_size": 512, 00:08:49.174 "num_blocks": 190464, 00:08:49.174 "uuid": "b31fd062-e1e4-4a7a-b051-71ecfe2d9ada", 00:08:49.174 "assigned_rate_limits": { 00:08:49.174 "rw_ios_per_sec": 0, 00:08:49.174 "rw_mbytes_per_sec": 0, 00:08:49.174 "r_mbytes_per_sec": 0, 00:08:49.174 "w_mbytes_per_sec": 0 00:08:49.174 }, 00:08:49.174 "claimed": false, 00:08:49.174 "zoned": false, 00:08:49.174 "supported_io_types": { 00:08:49.174 "read": true, 00:08:49.174 "write": true, 00:08:49.174 "unmap": true, 00:08:49.174 "flush": true, 00:08:49.174 "reset": true, 00:08:49.174 "nvme_admin": false, 00:08:49.174 "nvme_io": false, 00:08:49.174 "nvme_io_md": false, 00:08:49.174 
"write_zeroes": true, 00:08:49.174 "zcopy": false, 00:08:49.174 "get_zone_info": false, 00:08:49.174 "zone_management": false, 00:08:49.174 "zone_append": false, 00:08:49.174 "compare": false, 00:08:49.174 "compare_and_write": false, 00:08:49.174 "abort": false, 00:08:49.174 "seek_hole": false, 00:08:49.174 "seek_data": false, 00:08:49.174 "copy": false, 00:08:49.174 "nvme_iov_md": false 00:08:49.174 }, 00:08:49.174 "memory_domains": [ 00:08:49.174 { 00:08:49.174 "dma_device_id": "system", 00:08:49.174 "dma_device_type": 1 00:08:49.174 }, 00:08:49.174 { 00:08:49.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.174 "dma_device_type": 2 00:08:49.174 }, 00:08:49.174 { 00:08:49.174 "dma_device_id": "system", 00:08:49.174 "dma_device_type": 1 00:08:49.174 }, 00:08:49.174 { 00:08:49.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.174 "dma_device_type": 2 00:08:49.174 }, 00:08:49.174 { 00:08:49.174 "dma_device_id": "system", 00:08:49.174 "dma_device_type": 1 00:08:49.174 }, 00:08:49.174 { 00:08:49.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.174 "dma_device_type": 2 00:08:49.174 } 00:08:49.174 ], 00:08:49.174 "driver_specific": { 00:08:49.174 "raid": { 00:08:49.174 "uuid": "b31fd062-e1e4-4a7a-b051-71ecfe2d9ada", 00:08:49.174 "strip_size_kb": 64, 00:08:49.174 "state": "online", 00:08:49.174 "raid_level": "raid0", 00:08:49.174 "superblock": true, 00:08:49.174 "num_base_bdevs": 3, 00:08:49.174 "num_base_bdevs_discovered": 3, 00:08:49.174 "num_base_bdevs_operational": 3, 00:08:49.174 "base_bdevs_list": [ 00:08:49.174 { 00:08:49.174 "name": "BaseBdev1", 00:08:49.174 "uuid": "6ea12f9b-fc12-4e98-8386-ee88aed53cfe", 00:08:49.174 "is_configured": true, 00:08:49.174 "data_offset": 2048, 00:08:49.174 "data_size": 63488 00:08:49.174 }, 00:08:49.174 { 00:08:49.174 "name": "BaseBdev2", 00:08:49.174 "uuid": "2ba952b1-baea-4316-b107-2378d5963645", 00:08:49.174 "is_configured": true, 00:08:49.174 "data_offset": 2048, 00:08:49.174 "data_size": 63488 00:08:49.174 }, 
00:08:49.174 { 00:08:49.174 "name": "BaseBdev3", 00:08:49.174 "uuid": "a62439c8-c6e4-4dd0-8053-4c6a959a5f1d", 00:08:49.174 "is_configured": true, 00:08:49.174 "data_offset": 2048, 00:08:49.174 "data_size": 63488 00:08:49.174 } 00:08:49.174 ] 00:08:49.174 } 00:08:49.174 } 00:08:49.174 }' 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:49.174 BaseBdev2 00:08:49.174 BaseBdev3' 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.174 
12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.174 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.175 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.175 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.175 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.175 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.175 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:49.175 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.175 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.175 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.434 [2024-11-19 12:00:52.594502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:49.434 [2024-11-19 12:00:52.594533] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.434 [2024-11-19 12:00:52.594585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.434 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.434 "name": "Existed_Raid", 00:08:49.434 "uuid": "b31fd062-e1e4-4a7a-b051-71ecfe2d9ada", 00:08:49.434 "strip_size_kb": 64, 00:08:49.434 "state": "offline", 00:08:49.434 "raid_level": "raid0", 00:08:49.434 "superblock": true, 00:08:49.434 "num_base_bdevs": 3, 00:08:49.434 "num_base_bdevs_discovered": 2, 00:08:49.435 "num_base_bdevs_operational": 2, 00:08:49.435 "base_bdevs_list": [ 00:08:49.435 { 00:08:49.435 "name": null, 00:08:49.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.435 "is_configured": false, 00:08:49.435 "data_offset": 0, 00:08:49.435 "data_size": 63488 00:08:49.435 }, 00:08:49.435 { 00:08:49.435 "name": "BaseBdev2", 00:08:49.435 "uuid": "2ba952b1-baea-4316-b107-2378d5963645", 00:08:49.435 "is_configured": true, 00:08:49.435 "data_offset": 2048, 00:08:49.435 "data_size": 63488 00:08:49.435 }, 00:08:49.435 { 00:08:49.435 "name": "BaseBdev3", 00:08:49.435 "uuid": "a62439c8-c6e4-4dd0-8053-4c6a959a5f1d", 
00:08:49.435 "is_configured": true, 00:08:49.435 "data_offset": 2048, 00:08:49.435 "data_size": 63488 00:08:49.435 } 00:08:49.435 ] 00:08:49.435 }' 00:08:49.435 12:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.435 12:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.005 [2024-11-19 12:00:53.193030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.005 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.005 [2024-11-19 12:00:53.341829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:50.005 [2024-11-19 12:00:53.341882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.265 BaseBdev2 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:50.265 12:00:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.265 [ 00:08:50.265 { 00:08:50.265 "name": "BaseBdev2", 00:08:50.265 "aliases": [ 00:08:50.265 "48160fc8-f2e2-4590-b441-5fddb6b1b2fa" 00:08:50.265 ], 00:08:50.265 "product_name": "Malloc disk", 00:08:50.265 "block_size": 512, 00:08:50.265 "num_blocks": 65536, 00:08:50.265 "uuid": "48160fc8-f2e2-4590-b441-5fddb6b1b2fa", 00:08:50.265 "assigned_rate_limits": { 00:08:50.265 "rw_ios_per_sec": 0, 00:08:50.265 "rw_mbytes_per_sec": 0, 00:08:50.265 "r_mbytes_per_sec": 0, 00:08:50.265 "w_mbytes_per_sec": 0 00:08:50.265 }, 00:08:50.265 "claimed": false, 00:08:50.265 "zoned": false, 00:08:50.265 "supported_io_types": { 00:08:50.265 "read": true, 00:08:50.265 "write": true, 00:08:50.265 "unmap": true, 00:08:50.265 "flush": true, 00:08:50.265 "reset": true, 00:08:50.265 "nvme_admin": false, 00:08:50.265 "nvme_io": false, 00:08:50.265 "nvme_io_md": false, 00:08:50.265 "write_zeroes": true, 00:08:50.265 "zcopy": true, 00:08:50.265 "get_zone_info": false, 00:08:50.265 
"zone_management": false, 00:08:50.265 "zone_append": false, 00:08:50.265 "compare": false, 00:08:50.265 "compare_and_write": false, 00:08:50.265 "abort": true, 00:08:50.265 "seek_hole": false, 00:08:50.265 "seek_data": false, 00:08:50.265 "copy": true, 00:08:50.265 "nvme_iov_md": false 00:08:50.265 }, 00:08:50.265 "memory_domains": [ 00:08:50.265 { 00:08:50.265 "dma_device_id": "system", 00:08:50.265 "dma_device_type": 1 00:08:50.265 }, 00:08:50.265 { 00:08:50.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.265 "dma_device_type": 2 00:08:50.265 } 00:08:50.265 ], 00:08:50.265 "driver_specific": {} 00:08:50.265 } 00:08:50.265 ] 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.265 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.265 BaseBdev3 00:08:50.266 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.266 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:50.266 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:50.266 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.266 12:00:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:50.266 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.266 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.266 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.266 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.266 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.266 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.266 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:50.266 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.266 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.266 [ 00:08:50.266 { 00:08:50.266 "name": "BaseBdev3", 00:08:50.266 "aliases": [ 00:08:50.266 "ab1d19f9-e531-4c3c-9519-1ff8d235206a" 00:08:50.266 ], 00:08:50.266 "product_name": "Malloc disk", 00:08:50.266 "block_size": 512, 00:08:50.266 "num_blocks": 65536, 00:08:50.266 "uuid": "ab1d19f9-e531-4c3c-9519-1ff8d235206a", 00:08:50.266 "assigned_rate_limits": { 00:08:50.266 "rw_ios_per_sec": 0, 00:08:50.266 "rw_mbytes_per_sec": 0, 00:08:50.266 "r_mbytes_per_sec": 0, 00:08:50.266 "w_mbytes_per_sec": 0 00:08:50.266 }, 00:08:50.266 "claimed": false, 00:08:50.266 "zoned": false, 00:08:50.266 "supported_io_types": { 00:08:50.266 "read": true, 00:08:50.266 "write": true, 00:08:50.266 "unmap": true, 00:08:50.266 "flush": true, 00:08:50.526 "reset": true, 00:08:50.526 "nvme_admin": false, 00:08:50.526 "nvme_io": false, 00:08:50.526 "nvme_io_md": false, 00:08:50.526 "write_zeroes": true, 00:08:50.526 
"zcopy": true, 00:08:50.526 "get_zone_info": false, 00:08:50.526 "zone_management": false, 00:08:50.526 "zone_append": false, 00:08:50.526 "compare": false, 00:08:50.526 "compare_and_write": false, 00:08:50.526 "abort": true, 00:08:50.526 "seek_hole": false, 00:08:50.526 "seek_data": false, 00:08:50.526 "copy": true, 00:08:50.526 "nvme_iov_md": false 00:08:50.526 }, 00:08:50.526 "memory_domains": [ 00:08:50.526 { 00:08:50.526 "dma_device_id": "system", 00:08:50.526 "dma_device_type": 1 00:08:50.526 }, 00:08:50.526 { 00:08:50.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.526 "dma_device_type": 2 00:08:50.526 } 00:08:50.526 ], 00:08:50.526 "driver_specific": {} 00:08:50.526 } 00:08:50.526 ] 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.526 [2024-11-19 12:00:53.654219] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:50.526 [2024-11-19 12:00:53.654306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:50.526 [2024-11-19 12:00:53.654350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.526 [2024-11-19 12:00:53.656140] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.526 12:00:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.526 "name": "Existed_Raid", 00:08:50.526 "uuid": "a7c6d7b2-c32f-485c-b684-69a92814cd5a", 00:08:50.526 "strip_size_kb": 64, 00:08:50.526 "state": "configuring", 00:08:50.526 "raid_level": "raid0", 00:08:50.526 "superblock": true, 00:08:50.526 "num_base_bdevs": 3, 00:08:50.526 "num_base_bdevs_discovered": 2, 00:08:50.526 "num_base_bdevs_operational": 3, 00:08:50.526 "base_bdevs_list": [ 00:08:50.526 { 00:08:50.526 "name": "BaseBdev1", 00:08:50.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.526 "is_configured": false, 00:08:50.526 "data_offset": 0, 00:08:50.526 "data_size": 0 00:08:50.526 }, 00:08:50.526 { 00:08:50.526 "name": "BaseBdev2", 00:08:50.526 "uuid": "48160fc8-f2e2-4590-b441-5fddb6b1b2fa", 00:08:50.526 "is_configured": true, 00:08:50.526 "data_offset": 2048, 00:08:50.526 "data_size": 63488 00:08:50.526 }, 00:08:50.526 { 00:08:50.526 "name": "BaseBdev3", 00:08:50.526 "uuid": "ab1d19f9-e531-4c3c-9519-1ff8d235206a", 00:08:50.526 "is_configured": true, 00:08:50.526 "data_offset": 2048, 00:08:50.526 "data_size": 63488 00:08:50.526 } 00:08:50.526 ] 00:08:50.526 }' 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.526 12:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.787 [2024-11-19 12:00:54.081521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.787 12:00:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.787 "name": "Existed_Raid", 00:08:50.787 "uuid": "a7c6d7b2-c32f-485c-b684-69a92814cd5a", 00:08:50.787 "strip_size_kb": 64, 
00:08:50.787 "state": "configuring", 00:08:50.787 "raid_level": "raid0", 00:08:50.787 "superblock": true, 00:08:50.787 "num_base_bdevs": 3, 00:08:50.787 "num_base_bdevs_discovered": 1, 00:08:50.787 "num_base_bdevs_operational": 3, 00:08:50.787 "base_bdevs_list": [ 00:08:50.787 { 00:08:50.787 "name": "BaseBdev1", 00:08:50.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.787 "is_configured": false, 00:08:50.787 "data_offset": 0, 00:08:50.787 "data_size": 0 00:08:50.787 }, 00:08:50.787 { 00:08:50.787 "name": null, 00:08:50.787 "uuid": "48160fc8-f2e2-4590-b441-5fddb6b1b2fa", 00:08:50.787 "is_configured": false, 00:08:50.787 "data_offset": 0, 00:08:50.787 "data_size": 63488 00:08:50.787 }, 00:08:50.787 { 00:08:50.787 "name": "BaseBdev3", 00:08:50.787 "uuid": "ab1d19f9-e531-4c3c-9519-1ff8d235206a", 00:08:50.787 "is_configured": true, 00:08:50.787 "data_offset": 2048, 00:08:50.787 "data_size": 63488 00:08:50.787 } 00:08:50.787 ] 00:08:50.787 }' 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.787 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.356 [2024-11-19 12:00:54.589407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:51.356 BaseBdev1 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.356 
[ 00:08:51.356 { 00:08:51.356 "name": "BaseBdev1", 00:08:51.356 "aliases": [ 00:08:51.356 "c51c28ab-eca8-4501-b679-802fa9979f59" 00:08:51.356 ], 00:08:51.356 "product_name": "Malloc disk", 00:08:51.356 "block_size": 512, 00:08:51.356 "num_blocks": 65536, 00:08:51.356 "uuid": "c51c28ab-eca8-4501-b679-802fa9979f59", 00:08:51.356 "assigned_rate_limits": { 00:08:51.356 "rw_ios_per_sec": 0, 00:08:51.356 "rw_mbytes_per_sec": 0, 00:08:51.356 "r_mbytes_per_sec": 0, 00:08:51.356 "w_mbytes_per_sec": 0 00:08:51.356 }, 00:08:51.356 "claimed": true, 00:08:51.356 "claim_type": "exclusive_write", 00:08:51.356 "zoned": false, 00:08:51.356 "supported_io_types": { 00:08:51.356 "read": true, 00:08:51.356 "write": true, 00:08:51.356 "unmap": true, 00:08:51.356 "flush": true, 00:08:51.356 "reset": true, 00:08:51.356 "nvme_admin": false, 00:08:51.356 "nvme_io": false, 00:08:51.356 "nvme_io_md": false, 00:08:51.356 "write_zeroes": true, 00:08:51.356 "zcopy": true, 00:08:51.356 "get_zone_info": false, 00:08:51.356 "zone_management": false, 00:08:51.356 "zone_append": false, 00:08:51.356 "compare": false, 00:08:51.356 "compare_and_write": false, 00:08:51.356 "abort": true, 00:08:51.356 "seek_hole": false, 00:08:51.356 "seek_data": false, 00:08:51.356 "copy": true, 00:08:51.356 "nvme_iov_md": false 00:08:51.356 }, 00:08:51.356 "memory_domains": [ 00:08:51.356 { 00:08:51.356 "dma_device_id": "system", 00:08:51.356 "dma_device_type": 1 00:08:51.356 }, 00:08:51.356 { 00:08:51.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.356 "dma_device_type": 2 00:08:51.356 } 00:08:51.356 ], 00:08:51.356 "driver_specific": {} 00:08:51.356 } 00:08:51.356 ] 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.356 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.357 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.357 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.357 "name": "Existed_Raid", 00:08:51.357 "uuid": "a7c6d7b2-c32f-485c-b684-69a92814cd5a", 00:08:51.357 "strip_size_kb": 64, 00:08:51.357 "state": "configuring", 00:08:51.357 "raid_level": "raid0", 00:08:51.357 "superblock": true, 
00:08:51.357 "num_base_bdevs": 3, 00:08:51.357 "num_base_bdevs_discovered": 2, 00:08:51.357 "num_base_bdevs_operational": 3, 00:08:51.357 "base_bdevs_list": [ 00:08:51.357 { 00:08:51.357 "name": "BaseBdev1", 00:08:51.357 "uuid": "c51c28ab-eca8-4501-b679-802fa9979f59", 00:08:51.357 "is_configured": true, 00:08:51.357 "data_offset": 2048, 00:08:51.357 "data_size": 63488 00:08:51.357 }, 00:08:51.357 { 00:08:51.357 "name": null, 00:08:51.357 "uuid": "48160fc8-f2e2-4590-b441-5fddb6b1b2fa", 00:08:51.357 "is_configured": false, 00:08:51.357 "data_offset": 0, 00:08:51.357 "data_size": 63488 00:08:51.357 }, 00:08:51.357 { 00:08:51.357 "name": "BaseBdev3", 00:08:51.357 "uuid": "ab1d19f9-e531-4c3c-9519-1ff8d235206a", 00:08:51.357 "is_configured": true, 00:08:51.357 "data_offset": 2048, 00:08:51.357 "data_size": 63488 00:08:51.357 } 00:08:51.357 ] 00:08:51.357 }' 00:08:51.357 12:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.357 12:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.925 [2024-11-19 12:00:55.092601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.925 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.926 "name": "Existed_Raid", 00:08:51.926 "uuid": "a7c6d7b2-c32f-485c-b684-69a92814cd5a", 00:08:51.926 "strip_size_kb": 64, 00:08:51.926 "state": "configuring", 00:08:51.926 "raid_level": "raid0", 00:08:51.926 "superblock": true, 00:08:51.926 "num_base_bdevs": 3, 00:08:51.926 "num_base_bdevs_discovered": 1, 00:08:51.926 "num_base_bdevs_operational": 3, 00:08:51.926 "base_bdevs_list": [ 00:08:51.926 { 00:08:51.926 "name": "BaseBdev1", 00:08:51.926 "uuid": "c51c28ab-eca8-4501-b679-802fa9979f59", 00:08:51.926 "is_configured": true, 00:08:51.926 "data_offset": 2048, 00:08:51.926 "data_size": 63488 00:08:51.926 }, 00:08:51.926 { 00:08:51.926 "name": null, 00:08:51.926 "uuid": "48160fc8-f2e2-4590-b441-5fddb6b1b2fa", 00:08:51.926 "is_configured": false, 00:08:51.926 "data_offset": 0, 00:08:51.926 "data_size": 63488 00:08:51.926 }, 00:08:51.926 { 00:08:51.926 "name": null, 00:08:51.926 "uuid": "ab1d19f9-e531-4c3c-9519-1ff8d235206a", 00:08:51.926 "is_configured": false, 00:08:51.926 "data_offset": 0, 00:08:51.926 "data_size": 63488 00:08:51.926 } 00:08:51.926 ] 00:08:51.926 }' 00:08:51.926 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.926 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.494 [2024-11-19 12:00:55.615767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.494 "name": "Existed_Raid", 00:08:52.494 "uuid": "a7c6d7b2-c32f-485c-b684-69a92814cd5a", 00:08:52.494 "strip_size_kb": 64, 00:08:52.494 "state": "configuring", 00:08:52.494 "raid_level": "raid0", 00:08:52.494 "superblock": true, 00:08:52.494 "num_base_bdevs": 3, 00:08:52.494 "num_base_bdevs_discovered": 2, 00:08:52.494 "num_base_bdevs_operational": 3, 00:08:52.494 "base_bdevs_list": [ 00:08:52.494 { 00:08:52.494 "name": "BaseBdev1", 00:08:52.494 "uuid": "c51c28ab-eca8-4501-b679-802fa9979f59", 00:08:52.494 "is_configured": true, 00:08:52.494 "data_offset": 2048, 00:08:52.494 "data_size": 63488 00:08:52.494 }, 00:08:52.494 { 00:08:52.494 "name": null, 00:08:52.494 "uuid": "48160fc8-f2e2-4590-b441-5fddb6b1b2fa", 00:08:52.494 "is_configured": false, 00:08:52.494 "data_offset": 0, 00:08:52.494 "data_size": 63488 00:08:52.494 }, 00:08:52.494 { 00:08:52.494 "name": "BaseBdev3", 00:08:52.494 "uuid": "ab1d19f9-e531-4c3c-9519-1ff8d235206a", 00:08:52.494 "is_configured": true, 00:08:52.494 "data_offset": 2048, 00:08:52.494 "data_size": 63488 00:08:52.494 } 00:08:52.494 ] 00:08:52.494 }' 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.494 12:00:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.754 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:52.754 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.754 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.754 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.754 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.754 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:52.754 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:52.754 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.754 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.754 [2024-11-19 12:00:56.087013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.014 "name": "Existed_Raid", 00:08:53.014 "uuid": "a7c6d7b2-c32f-485c-b684-69a92814cd5a", 00:08:53.014 "strip_size_kb": 64, 00:08:53.014 "state": "configuring", 00:08:53.014 "raid_level": "raid0", 00:08:53.014 "superblock": true, 00:08:53.014 "num_base_bdevs": 3, 00:08:53.014 "num_base_bdevs_discovered": 1, 00:08:53.014 "num_base_bdevs_operational": 3, 00:08:53.014 "base_bdevs_list": [ 00:08:53.014 { 00:08:53.014 "name": null, 00:08:53.014 "uuid": "c51c28ab-eca8-4501-b679-802fa9979f59", 00:08:53.014 "is_configured": false, 00:08:53.014 "data_offset": 0, 00:08:53.014 "data_size": 63488 00:08:53.014 }, 00:08:53.014 { 00:08:53.014 "name": null, 00:08:53.014 "uuid": "48160fc8-f2e2-4590-b441-5fddb6b1b2fa", 00:08:53.014 "is_configured": false, 00:08:53.014 "data_offset": 0, 00:08:53.014 
"data_size": 63488 00:08:53.014 }, 00:08:53.014 { 00:08:53.014 "name": "BaseBdev3", 00:08:53.014 "uuid": "ab1d19f9-e531-4c3c-9519-1ff8d235206a", 00:08:53.014 "is_configured": true, 00:08:53.014 "data_offset": 2048, 00:08:53.014 "data_size": 63488 00:08:53.014 } 00:08:53.014 ] 00:08:53.014 }' 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.014 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.272 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.272 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:53.272 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.272 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.538 [2024-11-19 12:00:56.691938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.538 12:00:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.538 "name": "Existed_Raid", 00:08:53.538 "uuid": "a7c6d7b2-c32f-485c-b684-69a92814cd5a", 00:08:53.538 "strip_size_kb": 64, 00:08:53.538 "state": "configuring", 00:08:53.538 "raid_level": "raid0", 00:08:53.538 "superblock": true, 00:08:53.538 "num_base_bdevs": 3, 00:08:53.538 
"num_base_bdevs_discovered": 2, 00:08:53.538 "num_base_bdevs_operational": 3, 00:08:53.538 "base_bdevs_list": [ 00:08:53.538 { 00:08:53.538 "name": null, 00:08:53.538 "uuid": "c51c28ab-eca8-4501-b679-802fa9979f59", 00:08:53.538 "is_configured": false, 00:08:53.538 "data_offset": 0, 00:08:53.538 "data_size": 63488 00:08:53.538 }, 00:08:53.538 { 00:08:53.538 "name": "BaseBdev2", 00:08:53.538 "uuid": "48160fc8-f2e2-4590-b441-5fddb6b1b2fa", 00:08:53.538 "is_configured": true, 00:08:53.538 "data_offset": 2048, 00:08:53.538 "data_size": 63488 00:08:53.538 }, 00:08:53.538 { 00:08:53.538 "name": "BaseBdev3", 00:08:53.538 "uuid": "ab1d19f9-e531-4c3c-9519-1ff8d235206a", 00:08:53.538 "is_configured": true, 00:08:53.538 "data_offset": 2048, 00:08:53.538 "data_size": 63488 00:08:53.538 } 00:08:53.538 ] 00:08:53.538 }' 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.538 12:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.798 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.798 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.798 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.798 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:53.798 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.798 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:53.798 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:53.798 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.798 12:00:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.798 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.798 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c51c28ab-eca8-4501-b679-802fa9979f59 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.058 [2024-11-19 12:00:57.214815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:54.058 [2024-11-19 12:00:57.215105] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:54.058 [2024-11-19 12:00:57.215163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:54.058 [2024-11-19 12:00:57.215419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:54.058 [2024-11-19 12:00:57.215592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:54.058 [2024-11-19 12:00:57.215633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:54.058 NewBaseBdev 00:08:54.058 [2024-11-19 12:00:57.215799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:54.058 
12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.058 [ 00:08:54.058 { 00:08:54.058 "name": "NewBaseBdev", 00:08:54.058 "aliases": [ 00:08:54.058 "c51c28ab-eca8-4501-b679-802fa9979f59" 00:08:54.058 ], 00:08:54.058 "product_name": "Malloc disk", 00:08:54.058 "block_size": 512, 00:08:54.058 "num_blocks": 65536, 00:08:54.058 "uuid": "c51c28ab-eca8-4501-b679-802fa9979f59", 00:08:54.058 "assigned_rate_limits": { 00:08:54.058 "rw_ios_per_sec": 0, 00:08:54.058 "rw_mbytes_per_sec": 0, 00:08:54.058 "r_mbytes_per_sec": 0, 00:08:54.058 "w_mbytes_per_sec": 0 00:08:54.058 }, 00:08:54.058 "claimed": true, 00:08:54.058 "claim_type": "exclusive_write", 00:08:54.058 "zoned": false, 00:08:54.058 "supported_io_types": { 00:08:54.058 "read": true, 00:08:54.058 "write": true, 00:08:54.058 
"unmap": true, 00:08:54.058 "flush": true, 00:08:54.058 "reset": true, 00:08:54.058 "nvme_admin": false, 00:08:54.058 "nvme_io": false, 00:08:54.058 "nvme_io_md": false, 00:08:54.058 "write_zeroes": true, 00:08:54.058 "zcopy": true, 00:08:54.058 "get_zone_info": false, 00:08:54.058 "zone_management": false, 00:08:54.058 "zone_append": false, 00:08:54.058 "compare": false, 00:08:54.058 "compare_and_write": false, 00:08:54.058 "abort": true, 00:08:54.058 "seek_hole": false, 00:08:54.058 "seek_data": false, 00:08:54.058 "copy": true, 00:08:54.058 "nvme_iov_md": false 00:08:54.058 }, 00:08:54.058 "memory_domains": [ 00:08:54.058 { 00:08:54.058 "dma_device_id": "system", 00:08:54.058 "dma_device_type": 1 00:08:54.058 }, 00:08:54.058 { 00:08:54.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.058 "dma_device_type": 2 00:08:54.058 } 00:08:54.058 ], 00:08:54.058 "driver_specific": {} 00:08:54.058 } 00:08:54.058 ] 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.058 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.058 "name": "Existed_Raid", 00:08:54.058 "uuid": "a7c6d7b2-c32f-485c-b684-69a92814cd5a", 00:08:54.058 "strip_size_kb": 64, 00:08:54.058 "state": "online", 00:08:54.058 "raid_level": "raid0", 00:08:54.059 "superblock": true, 00:08:54.059 "num_base_bdevs": 3, 00:08:54.059 "num_base_bdevs_discovered": 3, 00:08:54.059 "num_base_bdevs_operational": 3, 00:08:54.059 "base_bdevs_list": [ 00:08:54.059 { 00:08:54.059 "name": "NewBaseBdev", 00:08:54.059 "uuid": "c51c28ab-eca8-4501-b679-802fa9979f59", 00:08:54.059 "is_configured": true, 00:08:54.059 "data_offset": 2048, 00:08:54.059 "data_size": 63488 00:08:54.059 }, 00:08:54.059 { 00:08:54.059 "name": "BaseBdev2", 00:08:54.059 "uuid": "48160fc8-f2e2-4590-b441-5fddb6b1b2fa", 00:08:54.059 "is_configured": true, 00:08:54.059 "data_offset": 2048, 00:08:54.059 "data_size": 63488 00:08:54.059 }, 00:08:54.059 { 00:08:54.059 "name": "BaseBdev3", 00:08:54.059 "uuid": "ab1d19f9-e531-4c3c-9519-1ff8d235206a", 00:08:54.059 
"is_configured": true, 00:08:54.059 "data_offset": 2048, 00:08:54.059 "data_size": 63488 00:08:54.059 } 00:08:54.059 ] 00:08:54.059 }' 00:08:54.059 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.059 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.629 [2024-11-19 12:00:57.706341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:54.629 "name": "Existed_Raid", 00:08:54.629 "aliases": [ 00:08:54.629 "a7c6d7b2-c32f-485c-b684-69a92814cd5a" 00:08:54.629 ], 00:08:54.629 "product_name": "Raid 
Volume", 00:08:54.629 "block_size": 512, 00:08:54.629 "num_blocks": 190464, 00:08:54.629 "uuid": "a7c6d7b2-c32f-485c-b684-69a92814cd5a", 00:08:54.629 "assigned_rate_limits": { 00:08:54.629 "rw_ios_per_sec": 0, 00:08:54.629 "rw_mbytes_per_sec": 0, 00:08:54.629 "r_mbytes_per_sec": 0, 00:08:54.629 "w_mbytes_per_sec": 0 00:08:54.629 }, 00:08:54.629 "claimed": false, 00:08:54.629 "zoned": false, 00:08:54.629 "supported_io_types": { 00:08:54.629 "read": true, 00:08:54.629 "write": true, 00:08:54.629 "unmap": true, 00:08:54.629 "flush": true, 00:08:54.629 "reset": true, 00:08:54.629 "nvme_admin": false, 00:08:54.629 "nvme_io": false, 00:08:54.629 "nvme_io_md": false, 00:08:54.629 "write_zeroes": true, 00:08:54.629 "zcopy": false, 00:08:54.629 "get_zone_info": false, 00:08:54.629 "zone_management": false, 00:08:54.629 "zone_append": false, 00:08:54.629 "compare": false, 00:08:54.629 "compare_and_write": false, 00:08:54.629 "abort": false, 00:08:54.629 "seek_hole": false, 00:08:54.629 "seek_data": false, 00:08:54.629 "copy": false, 00:08:54.629 "nvme_iov_md": false 00:08:54.629 }, 00:08:54.629 "memory_domains": [ 00:08:54.629 { 00:08:54.629 "dma_device_id": "system", 00:08:54.629 "dma_device_type": 1 00:08:54.629 }, 00:08:54.629 { 00:08:54.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.629 "dma_device_type": 2 00:08:54.629 }, 00:08:54.629 { 00:08:54.629 "dma_device_id": "system", 00:08:54.629 "dma_device_type": 1 00:08:54.629 }, 00:08:54.629 { 00:08:54.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.629 "dma_device_type": 2 00:08:54.629 }, 00:08:54.629 { 00:08:54.629 "dma_device_id": "system", 00:08:54.629 "dma_device_type": 1 00:08:54.629 }, 00:08:54.629 { 00:08:54.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.629 "dma_device_type": 2 00:08:54.629 } 00:08:54.629 ], 00:08:54.629 "driver_specific": { 00:08:54.629 "raid": { 00:08:54.629 "uuid": "a7c6d7b2-c32f-485c-b684-69a92814cd5a", 00:08:54.629 "strip_size_kb": 64, 00:08:54.629 "state": "online", 
00:08:54.629 "raid_level": "raid0", 00:08:54.629 "superblock": true, 00:08:54.629 "num_base_bdevs": 3, 00:08:54.629 "num_base_bdevs_discovered": 3, 00:08:54.629 "num_base_bdevs_operational": 3, 00:08:54.629 "base_bdevs_list": [ 00:08:54.629 { 00:08:54.629 "name": "NewBaseBdev", 00:08:54.629 "uuid": "c51c28ab-eca8-4501-b679-802fa9979f59", 00:08:54.629 "is_configured": true, 00:08:54.629 "data_offset": 2048, 00:08:54.629 "data_size": 63488 00:08:54.629 }, 00:08:54.629 { 00:08:54.629 "name": "BaseBdev2", 00:08:54.629 "uuid": "48160fc8-f2e2-4590-b441-5fddb6b1b2fa", 00:08:54.629 "is_configured": true, 00:08:54.629 "data_offset": 2048, 00:08:54.629 "data_size": 63488 00:08:54.629 }, 00:08:54.629 { 00:08:54.629 "name": "BaseBdev3", 00:08:54.629 "uuid": "ab1d19f9-e531-4c3c-9519-1ff8d235206a", 00:08:54.629 "is_configured": true, 00:08:54.629 "data_offset": 2048, 00:08:54.629 "data_size": 63488 00:08:54.629 } 00:08:54.629 ] 00:08:54.629 } 00:08:54.629 } 00:08:54.629 }' 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:54.629 BaseBdev2 00:08:54.629 BaseBdev3' 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.629 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.629 [2024-11-19 12:00:57.993522] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.630 [2024-11-19 12:00:57.993552] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.630 [2024-11-19 12:00:57.993628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.630 [2024-11-19 12:00:57.993683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.630 [2024-11-19 12:00:57.993696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:54.630 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.630 12:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64494 00:08:54.630 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64494 ']' 00:08:54.630 12:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64494 00:08:54.630 12:00:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:54.890 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.890 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64494 00:08:54.890 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.890 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.890 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64494' 00:08:54.890 killing process with pid 64494 00:08:54.890 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64494 00:08:54.890 [2024-11-19 12:00:58.040234] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.890 12:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64494 00:08:55.149 [2024-11-19 12:00:58.336401] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.124 12:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:56.124 00:08:56.124 real 0m10.602s 00:08:56.124 user 0m16.896s 00:08:56.124 sys 0m1.881s 00:08:56.124 12:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.124 12:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.124 ************************************ 00:08:56.124 END TEST raid_state_function_test_sb 00:08:56.124 ************************************ 00:08:56.124 12:00:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:56.124 12:00:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:56.124 12:00:59 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.124 12:00:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.384 ************************************ 00:08:56.384 START TEST raid_superblock_test 00:08:56.384 ************************************ 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:56.384 12:00:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65114 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65114 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65114 ']' 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.384 12:00:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.385 12:00:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.385 [2024-11-19 12:00:59.593549] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:08:56.385 [2024-11-19 12:00:59.593755] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65114 ] 00:08:56.644 [2024-11-19 12:00:59.768252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.644 [2024-11-19 12:00:59.885178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.903 [2024-11-19 12:01:00.074272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.903 [2024-11-19 12:01:00.074409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:57.163 
12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.163 malloc1 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.163 [2024-11-19 12:01:00.479965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:57.163 [2024-11-19 12:01:00.480096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.163 [2024-11-19 12:01:00.480143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:57.163 [2024-11-19 12:01:00.480174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.163 [2024-11-19 12:01:00.482208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.163 [2024-11-19 12:01:00.482275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:57.163 pt1 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.163 malloc2 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.163 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.423 [2024-11-19 12:01:00.539402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:57.423 [2024-11-19 12:01:00.539456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.423 [2024-11-19 12:01:00.539481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:57.423 [2024-11-19 12:01:00.539490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.423 [2024-11-19 12:01:00.541747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.423 [2024-11-19 12:01:00.541862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:57.423 
pt2 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.423 malloc3 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.423 [2024-11-19 12:01:00.605716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:57.423 [2024-11-19 12:01:00.605804] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.423 [2024-11-19 12:01:00.605840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:57.423 [2024-11-19 12:01:00.605867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.423 [2024-11-19 12:01:00.607815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.423 [2024-11-19 12:01:00.607883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:57.423 pt3 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.423 [2024-11-19 12:01:00.617743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:57.423 [2024-11-19 12:01:00.619456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:57.423 [2024-11-19 12:01:00.619555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:57.423 [2024-11-19 12:01:00.619717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:57.423 [2024-11-19 12:01:00.619765] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:57.423 [2024-11-19 12:01:00.620025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:57.423 [2024-11-19 12:01:00.620226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:57.423 [2024-11-19 12:01:00.620268] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:57.423 [2024-11-19 12:01:00.620443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.423 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.424 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.424 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.424 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.424 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.424 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.424 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.424 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.424 12:01:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.424 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.424 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.424 "name": "raid_bdev1", 00:08:57.424 "uuid": "0cd5e89e-15bd-4515-a8ee-f6ed7f9a0716", 00:08:57.424 "strip_size_kb": 64, 00:08:57.424 "state": "online", 00:08:57.424 "raid_level": "raid0", 00:08:57.424 "superblock": true, 00:08:57.424 "num_base_bdevs": 3, 00:08:57.424 "num_base_bdevs_discovered": 3, 00:08:57.424 "num_base_bdevs_operational": 3, 00:08:57.424 "base_bdevs_list": [ 00:08:57.424 { 00:08:57.424 "name": "pt1", 00:08:57.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.424 "is_configured": true, 00:08:57.424 "data_offset": 2048, 00:08:57.424 "data_size": 63488 00:08:57.424 }, 00:08:57.424 { 00:08:57.424 "name": "pt2", 00:08:57.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.424 "is_configured": true, 00:08:57.424 "data_offset": 2048, 00:08:57.424 "data_size": 63488 00:08:57.424 }, 00:08:57.424 { 00:08:57.424 "name": "pt3", 00:08:57.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:57.424 "is_configured": true, 00:08:57.424 "data_offset": 2048, 00:08:57.424 "data_size": 63488 00:08:57.424 } 00:08:57.424 ] 00:08:57.424 }' 00:08:57.424 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.424 12:01:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.684 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:57.684 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:57.684 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.684 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names
00:08:57.684 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:57.684 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:57.684 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:57.684 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.684 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.684 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:57.684 [2024-11-19 12:01:01.025278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:57.684 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.684 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:57.684 "name": "raid_bdev1",
00:08:57.684 "aliases": [
00:08:57.684 "0cd5e89e-15bd-4515-a8ee-f6ed7f9a0716"
00:08:57.684 ],
00:08:57.684 "product_name": "Raid Volume",
00:08:57.684 "block_size": 512,
00:08:57.684 "num_blocks": 190464,
00:08:57.684 "uuid": "0cd5e89e-15bd-4515-a8ee-f6ed7f9a0716",
00:08:57.684 "assigned_rate_limits": {
00:08:57.684 "rw_ios_per_sec": 0,
00:08:57.684 "rw_mbytes_per_sec": 0,
00:08:57.684 "r_mbytes_per_sec": 0,
00:08:57.684 "w_mbytes_per_sec": 0
00:08:57.684 },
00:08:57.684 "claimed": false,
00:08:57.684 "zoned": false,
00:08:57.684 "supported_io_types": {
00:08:57.684 "read": true,
00:08:57.684 "write": true,
00:08:57.684 "unmap": true,
00:08:57.684 "flush": true,
00:08:57.684 "reset": true,
00:08:57.684 "nvme_admin": false,
00:08:57.684 "nvme_io": false,
00:08:57.684 "nvme_io_md": false,
00:08:57.684 "write_zeroes": true,
00:08:57.684 "zcopy": false,
00:08:57.684 "get_zone_info": false,
00:08:57.684 "zone_management": false,
00:08:57.684 "zone_append": false,
00:08:57.684 "compare": false,
00:08:57.684 "compare_and_write": false,
00:08:57.684 "abort": false,
00:08:57.684 "seek_hole": false,
00:08:57.684 "seek_data": false,
00:08:57.684 "copy": false,
00:08:57.684 "nvme_iov_md": false
00:08:57.684 },
00:08:57.684 "memory_domains": [
00:08:57.684 {
00:08:57.684 "dma_device_id": "system",
00:08:57.684 "dma_device_type": 1
00:08:57.684 },
00:08:57.684 {
00:08:57.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:57.684 "dma_device_type": 2
00:08:57.684 },
00:08:57.684 {
00:08:57.684 "dma_device_id": "system",
00:08:57.684 "dma_device_type": 1
00:08:57.684 },
00:08:57.684 {
00:08:57.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:57.684 "dma_device_type": 2
00:08:57.684 },
00:08:57.684 {
00:08:57.684 "dma_device_id": "system",
00:08:57.684 "dma_device_type": 1
00:08:57.684 },
00:08:57.684 {
00:08:57.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:57.684 "dma_device_type": 2
00:08:57.684 }
00:08:57.684 ],
00:08:57.684 "driver_specific": {
00:08:57.684 "raid": {
00:08:57.684 "uuid": "0cd5e89e-15bd-4515-a8ee-f6ed7f9a0716",
00:08:57.684 "strip_size_kb": 64,
00:08:57.684 "state": "online",
00:08:57.684 "raid_level": "raid0",
00:08:57.684 "superblock": true,
00:08:57.684 "num_base_bdevs": 3,
00:08:57.684 "num_base_bdevs_discovered": 3,
00:08:57.684 "num_base_bdevs_operational": 3,
00:08:57.684 "base_bdevs_list": [
00:08:57.684 {
00:08:57.684 "name": "pt1",
00:08:57.684 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:57.684 "is_configured": true,
00:08:57.684 "data_offset": 2048,
00:08:57.684 "data_size": 63488
00:08:57.684 },
00:08:57.684 {
00:08:57.684 "name": "pt2",
00:08:57.684 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:57.684 "is_configured": true,
00:08:57.684 "data_offset": 2048,
00:08:57.684 "data_size": 63488
00:08:57.684 },
00:08:57.684 {
00:08:57.684 "name": "pt3",
00:08:57.684 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:57.684 "is_configured": true,
00:08:57.684 "data_offset": 2048,
00:08:57.684 "data_size": 63488
00:08:57.684 }
00:08:57.684 ]
00:08:57.684 }
00:08:57.684 }
00:08:57.684 }'
00:08:57.684 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:57.944 pt2
00:08:57.944 pt3'
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.944 [2024-11-19 12:01:01.260825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0cd5e89e-15bd-4515-a8ee-f6ed7f9a0716
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0cd5e89e-15bd-4515-a8ee-f6ed7f9a0716 ']'
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.944 [2024-11-19 12:01:01.308514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:57.944 [2024-11-19 12:01:01.308574] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:57.944 [2024-11-19 12:01:01.308656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:57.944 [2024-11-19 12:01:01.308730] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:57.944 [2024-11-19 12:01:01.308761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:08:57.944 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.204 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.205 [2024-11-19 12:01:01.440324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:58.205 [2024-11-19 12:01:01.442154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:58.205 [2024-11-19 12:01:01.442202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:08:58.205 [2024-11-19 12:01:01.442246] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:08:58.205 [2024-11-19 12:01:01.442289] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:08:58.205 [2024-11-19 12:01:01.442306] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:08:58.205 [2024-11-19 12:01:01.442321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:58.205 [2024-11-19 12:01:01.442331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:08:58.205 request:
00:08:58.205 {
00:08:58.205 "name": "raid_bdev1",
00:08:58.205 "raid_level": "raid0",
00:08:58.205 "base_bdevs": [
00:08:58.205 "malloc1",
00:08:58.205 "malloc2",
00:08:58.205 "malloc3"
00:08:58.205 ],
00:08:58.205 "strip_size_kb": 64,
00:08:58.205 "superblock": false,
00:08:58.205 "method": "bdev_raid_create",
00:08:58.205 "req_id": 1
00:08:58.205 }
00:08:58.205 Got JSON-RPC error response
00:08:58.205 response:
00:08:58.205 {
00:08:58.205 "code": -17,
00:08:58.205 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:08:58.205 }
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.205 [2024-11-19 12:01:01.496207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:58.205 [2024-11-19 12:01:01.496302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:58.205 [2024-11-19 12:01:01.496339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:08:58.205 [2024-11-19 12:01:01.496366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:58.205 [2024-11-19 12:01:01.498520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:58.205 [2024-11-19 12:01:01.498601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:58.205 [2024-11-19 12:01:01.498705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:08:58.205 [2024-11-19 12:01:01.498799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:58.205 pt1
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:58.205 "name": "raid_bdev1",
00:08:58.205 "uuid": "0cd5e89e-15bd-4515-a8ee-f6ed7f9a0716",
00:08:58.205 "strip_size_kb": 64,
00:08:58.205 "state": "configuring",
00:08:58.205 "raid_level": "raid0",
00:08:58.205 "superblock": true,
00:08:58.205 "num_base_bdevs": 3,
00:08:58.205 "num_base_bdevs_discovered": 1,
00:08:58.205 "num_base_bdevs_operational": 3,
00:08:58.205 "base_bdevs_list": [
00:08:58.205 {
00:08:58.205 "name": "pt1",
00:08:58.205 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:58.205 "is_configured": true,
00:08:58.205 "data_offset": 2048,
00:08:58.205 "data_size": 63488
00:08:58.205 },
00:08:58.205 {
00:08:58.205 "name": null,
00:08:58.205 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:58.205 "is_configured": false,
00:08:58.205 "data_offset": 2048,
00:08:58.205 "data_size": 63488
00:08:58.205 },
00:08:58.205 {
00:08:58.205 "name": null,
00:08:58.205 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:58.205 "is_configured": false,
00:08:58.205 "data_offset": 2048,
00:08:58.205 "data_size": 63488
00:08:58.205 }
00:08:58.205 ]
00:08:58.205 }'
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:58.205 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.775 [2024-11-19 12:01:01.947481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:58.775 [2024-11-19 12:01:01.947602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:58.775 [2024-11-19 12:01:01.947644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:08:58.775 [2024-11-19 12:01:01.947673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:58.775 [2024-11-19 12:01:01.948139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:58.775 [2024-11-19 12:01:01.948196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:58.775 [2024-11-19 12:01:01.948306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:58.775 [2024-11-19 12:01:01.948356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:58.775 pt2
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.775 [2024-11-19 12:01:01.955461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:58.775 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.775 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:58.775 "name": "raid_bdev1",
00:08:58.775 "uuid": "0cd5e89e-15bd-4515-a8ee-f6ed7f9a0716",
00:08:58.775 "strip_size_kb": 64,
00:08:58.775 "state": "configuring",
00:08:58.775 "raid_level": "raid0",
00:08:58.775 "superblock": true,
00:08:58.775 "num_base_bdevs": 3,
00:08:58.775 "num_base_bdevs_discovered": 1,
00:08:58.775 "num_base_bdevs_operational": 3,
00:08:58.775 "base_bdevs_list": [
00:08:58.775 {
00:08:58.775 "name": "pt1",
00:08:58.775 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:58.775 "is_configured": true,
00:08:58.775 "data_offset": 2048,
00:08:58.775 "data_size": 63488
00:08:58.775 },
00:08:58.775 {
00:08:58.775 "name": null,
00:08:58.775 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:58.775 "is_configured": false,
00:08:58.775 "data_offset": 0,
00:08:58.775 "data_size": 63488
00:08:58.775 },
00:08:58.775 {
00:08:58.775 "name": null,
00:08:58.775 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:58.775 "is_configured": false,
00:08:58.775 "data_offset": 2048,
00:08:58.775 "data_size": 63488
00:08:58.775 }
00:08:58.775 ]
00:08:58.775 }'
00:08:58.775 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:58.775 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.035 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:08:59.035 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:59.035 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:59.035 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.035 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.035 [2024-11-19 12:01:02.406668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:59.035 [2024-11-19 12:01:02.406802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:59.035 [2024-11-19 12:01:02.406838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:08:59.035 [2024-11-19 12:01:02.406869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:59.035 [2024-11-19 12:01:02.407358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:59.035 [2024-11-19 12:01:02.407417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:59.035 [2024-11-19 12:01:02.407525] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:59.035 [2024-11-19 12:01:02.407579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:59.294 pt2
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.294 [2024-11-19 12:01:02.418624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:08:59.294 [2024-11-19 12:01:02.418706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:59.294 [2024-11-19 12:01:02.418735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:08:59.294 [2024-11-19 12:01:02.418763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:59.294 [2024-11-19 12:01:02.419203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:59.294 [2024-11-19 12:01:02.419267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:08:59.294 [2024-11-19 12:01:02.419360] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:08:59.294 [2024-11-19 12:01:02.419412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:08:59.294 [2024-11-19 12:01:02.419568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:59.294 [2024-11-19 12:01:02.419613] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:08:59.294 [2024-11-19 12:01:02.419907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:08:59.294 [2024-11-19 12:01:02.420116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:59.294 [2024-11-19 12:01:02.420160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:08:59.294 [2024-11-19 12:01:02.420360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:59.294 pt3
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:59.294 "name": "raid_bdev1",
00:08:59.294 "uuid": "0cd5e89e-15bd-4515-a8ee-f6ed7f9a0716",
00:08:59.294 "strip_size_kb": 64,
00:08:59.294 "state": "online",
00:08:59.294 "raid_level": "raid0",
00:08:59.294 "superblock": true,
00:08:59.294 "num_base_bdevs": 3,
00:08:59.294 "num_base_bdevs_discovered": 3,
00:08:59.294 "num_base_bdevs_operational": 3,
00:08:59.294 "base_bdevs_list": [
00:08:59.294 {
00:08:59.294 "name": "pt1",
00:08:59.294 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:59.294 "is_configured": true,
00:08:59.294 "data_offset": 2048,
00:08:59.294 "data_size": 63488
00:08:59.294 },
00:08:59.294 {
00:08:59.294 "name": "pt2",
00:08:59.294 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:59.294 "is_configured": true,
00:08:59.294 "data_offset": 2048,
00:08:59.294 "data_size": 63488
00:08:59.294 },
00:08:59.294 {
00:08:59.294 "name": "pt3",
00:08:59.294 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:59.294 "is_configured": true,
00:08:59.294 "data_offset": 2048,
00:08:59.294 "data_size": 63488
00:08:59.294 }
00:08:59.294 ]
00:08:59.294 }'
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:59.294 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.554 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:08:59.554 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:59.554 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:59.554 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:59.554 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:59.554 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:59.554 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:59.554 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:59.554 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.554 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.554 [2024-11-19 12:01:02.874183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:59.554 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.554 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:59.554 "name": "raid_bdev1",
00:08:59.554 "aliases": [
00:08:59.554 "0cd5e89e-15bd-4515-a8ee-f6ed7f9a0716"
00:08:59.554 ],
00:08:59.554 "product_name": "Raid Volume",
00:08:59.554 "block_size": 512,
00:08:59.554 "num_blocks": 190464,
00:08:59.554 "uuid": "0cd5e89e-15bd-4515-a8ee-f6ed7f9a0716",
00:08:59.554 "assigned_rate_limits": {
00:08:59.554 "rw_ios_per_sec": 0,
00:08:59.554 "rw_mbytes_per_sec": 0,
00:08:59.554 "r_mbytes_per_sec": 0,
00:08:59.554 "w_mbytes_per_sec": 0
00:08:59.554 },
00:08:59.554 "claimed": false,
00:08:59.554 "zoned": false,
00:08:59.554 "supported_io_types": {
00:08:59.554 "read": true,
00:08:59.554 "write": true,
00:08:59.554 "unmap": true,
00:08:59.554 "flush": true,
00:08:59.554 "reset": true,
00:08:59.554 "nvme_admin": false,
00:08:59.554 "nvme_io": false,
00:08:59.554 "nvme_io_md": false,
00:08:59.554 "write_zeroes": true,
00:08:59.554 "zcopy": false,
00:08:59.554 "get_zone_info": false,
00:08:59.554 "zone_management": false,
00:08:59.554 "zone_append": false,
00:08:59.554 "compare": false,
00:08:59.554 "compare_and_write": false,
00:08:59.554 "abort": false,
00:08:59.554 "seek_hole": false,
00:08:59.554 "seek_data": false,
00:08:59.554 "copy": false,
00:08:59.554 "nvme_iov_md": false
00:08:59.554 },
00:08:59.554 "memory_domains": [
00:08:59.554 {
00:08:59.554 "dma_device_id": "system",
00:08:59.554 "dma_device_type": 1
00:08:59.554 },
00:08:59.554 {
00:08:59.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:59.554 "dma_device_type": 2
00:08:59.554 },
00:08:59.554 {
00:08:59.554 "dma_device_id": "system",
00:08:59.554 "dma_device_type": 1
00:08:59.554 },
00:08:59.554 {
00:08:59.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:59.554 "dma_device_type": 2
00:08:59.554 },
00:08:59.554 {
00:08:59.554 "dma_device_id": "system",
00:08:59.554 "dma_device_type": 1
00:08:59.554 },
00:08:59.554 {
00:08:59.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:59.554 "dma_device_type": 2
00:08:59.554 }
00:08:59.554 ],
00:08:59.554 "driver_specific": {
00:08:59.554 "raid": {
00:08:59.554 "uuid": "0cd5e89e-15bd-4515-a8ee-f6ed7f9a0716",
00:08:59.554 "strip_size_kb": 64,
00:08:59.554 "state": "online",
00:08:59.554 "raid_level": "raid0",
00:08:59.554 "superblock": true,
00:08:59.554 "num_base_bdevs": 3,
00:08:59.554 "num_base_bdevs_discovered": 3,
00:08:59.554 "num_base_bdevs_operational": 3,
00:08:59.554 "base_bdevs_list": [
00:08:59.554 {
00:08:59.554 "name": "pt1",
00:08:59.554 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:59.554 "is_configured": true,
00:08:59.554 "data_offset": 2048,
00:08:59.554 "data_size": 63488
00:08:59.554 },
00:08:59.554 {
00:08:59.554 "name": "pt2",
00:08:59.554 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:59.554 "is_configured": true,
00:08:59.554 "data_offset": 2048,
00:08:59.554 "data_size": 63488
00:08:59.554 },
{ 00:08:59.554 "name": "pt3", 00:08:59.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.554 "is_configured": true, 00:08:59.554 "data_offset": 2048, 00:08:59.554 "data_size": 63488 00:08:59.554 } 00:08:59.554 ] 00:08:59.554 } 00:08:59.554 } 00:08:59.554 }' 00:08:59.554 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.814 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:59.814 pt2 00:08:59.814 pt3' 00:08:59.814 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.814 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.814 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.814 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:59.814 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.814 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.814 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.814 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.814 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.815 [2024-11-19 
12:01:03.133686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0cd5e89e-15bd-4515-a8ee-f6ed7f9a0716 '!=' 0cd5e89e-15bd-4515-a8ee-f6ed7f9a0716 ']' 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65114 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65114 ']' 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65114 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.815 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65114 00:09:00.075 killing process with pid 65114 00:09:00.075 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.075 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.075 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65114' 00:09:00.075 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65114 00:09:00.075 [2024-11-19 12:01:03.205910] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.075 [2024-11-19 12:01:03.206023] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.075 [2024-11-19 12:01:03.206084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.075 [2024-11-19 12:01:03.206095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:00.075 12:01:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65114 00:09:00.334 [2024-11-19 12:01:03.523554] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.274 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:01.274 00:09:01.274 real 0m5.101s 00:09:01.274 user 0m7.278s 00:09:01.274 sys 0m0.863s 00:09:01.274 12:01:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.274 12:01:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.274 ************************************ 00:09:01.274 END TEST raid_superblock_test 00:09:01.274 ************************************ 00:09:01.534 12:01:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:01.534 12:01:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:01.534 12:01:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.534 12:01:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.534 ************************************ 00:09:01.534 START TEST raid_read_error_test 00:09:01.534 ************************************ 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:01.534 12:01:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.flAcK1Uwpu 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65367 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65367 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65367 ']' 00:09:01.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.534 12:01:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.534 [2024-11-19 12:01:04.783386] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:09:01.534 [2024-11-19 12:01:04.783510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65367 ] 00:09:01.794 [2024-11-19 12:01:04.957715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.794 [2024-11-19 12:01:05.068305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.054 [2024-11-19 12:01:05.263200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.054 [2024-11-19 12:01:05.263330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.314 BaseBdev1_malloc 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.314 true 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.314 [2024-11-19 12:01:05.676465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:02.314 [2024-11-19 12:01:05.676516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.314 [2024-11-19 12:01:05.676534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:02.314 [2024-11-19 12:01:05.676544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.314 [2024-11-19 12:01:05.678495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.314 [2024-11-19 12:01:05.678534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:02.314 BaseBdev1 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.314 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.574 BaseBdev2_malloc 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.574 true 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.574 [2024-11-19 12:01:05.739959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:02.574 [2024-11-19 12:01:05.740066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.574 [2024-11-19 12:01:05.740085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:02.574 [2024-11-19 12:01:05.740095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.574 [2024-11-19 12:01:05.742058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.574 [2024-11-19 12:01:05.742096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:02.574 BaseBdev2 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.574 BaseBdev3_malloc 00:09:02.574 12:01:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.574 true 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.574 [2024-11-19 12:01:05.837311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:02.574 [2024-11-19 12:01:05.837357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.574 [2024-11-19 12:01:05.837373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:02.574 [2024-11-19 12:01:05.837383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.574 [2024-11-19 12:01:05.839412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.574 [2024-11-19 12:01:05.839505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:02.574 BaseBdev3 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.574 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.574 [2024-11-19 12:01:05.849360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.574 [2024-11-19 12:01:05.851087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.575 [2024-11-19 12:01:05.851165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.575 [2024-11-19 12:01:05.851342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:02.575 [2024-11-19 12:01:05.851355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:02.575 [2024-11-19 12:01:05.851590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:02.575 [2024-11-19 12:01:05.851729] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:02.575 [2024-11-19 12:01:05.851742] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:02.575 [2024-11-19 12:01:05.851874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.575 12:01:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.575 "name": "raid_bdev1", 00:09:02.575 "uuid": "26e56fd3-d2b2-42e2-976a-4b435d090671", 00:09:02.575 "strip_size_kb": 64, 00:09:02.575 "state": "online", 00:09:02.575 "raid_level": "raid0", 00:09:02.575 "superblock": true, 00:09:02.575 "num_base_bdevs": 3, 00:09:02.575 "num_base_bdevs_discovered": 3, 00:09:02.575 "num_base_bdevs_operational": 3, 00:09:02.575 "base_bdevs_list": [ 00:09:02.575 { 00:09:02.575 "name": "BaseBdev1", 00:09:02.575 "uuid": "7569ab05-b8cb-5330-b153-82c39e754c64", 00:09:02.575 "is_configured": true, 00:09:02.575 "data_offset": 2048, 00:09:02.575 "data_size": 63488 00:09:02.575 }, 00:09:02.575 { 00:09:02.575 "name": "BaseBdev2", 00:09:02.575 "uuid": "8ed7226b-5ea4-5921-bd89-b05ee418b787", 00:09:02.575 "is_configured": true, 00:09:02.575 "data_offset": 2048, 00:09:02.575 "data_size": 63488 
00:09:02.575 }, 00:09:02.575 { 00:09:02.575 "name": "BaseBdev3", 00:09:02.575 "uuid": "7d1c5c37-0874-5c1d-968d-a6769b8b1fd1", 00:09:02.575 "is_configured": true, 00:09:02.575 "data_offset": 2048, 00:09:02.575 "data_size": 63488 00:09:02.575 } 00:09:02.575 ] 00:09:02.575 }' 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.575 12:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.144 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:03.144 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:03.144 [2024-11-19 12:01:06.329852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:04.083 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.084 "name": "raid_bdev1", 00:09:04.084 "uuid": "26e56fd3-d2b2-42e2-976a-4b435d090671", 00:09:04.084 "strip_size_kb": 64, 00:09:04.084 "state": "online", 00:09:04.084 "raid_level": "raid0", 00:09:04.084 "superblock": true, 00:09:04.084 "num_base_bdevs": 3, 00:09:04.084 "num_base_bdevs_discovered": 3, 00:09:04.084 "num_base_bdevs_operational": 3, 00:09:04.084 "base_bdevs_list": [ 00:09:04.084 { 00:09:04.084 "name": "BaseBdev1", 00:09:04.084 "uuid": "7569ab05-b8cb-5330-b153-82c39e754c64", 00:09:04.084 "is_configured": true, 00:09:04.084 "data_offset": 2048, 00:09:04.084 "data_size": 63488 
00:09:04.084 }, 00:09:04.084 { 00:09:04.084 "name": "BaseBdev2", 00:09:04.084 "uuid": "8ed7226b-5ea4-5921-bd89-b05ee418b787", 00:09:04.084 "is_configured": true, 00:09:04.084 "data_offset": 2048, 00:09:04.084 "data_size": 63488 00:09:04.084 }, 00:09:04.084 { 00:09:04.084 "name": "BaseBdev3", 00:09:04.084 "uuid": "7d1c5c37-0874-5c1d-968d-a6769b8b1fd1", 00:09:04.084 "is_configured": true, 00:09:04.084 "data_offset": 2048, 00:09:04.084 "data_size": 63488 00:09:04.084 } 00:09:04.084 ] 00:09:04.084 }' 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.084 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.344 [2024-11-19 12:01:07.659363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.344 [2024-11-19 12:01:07.659393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.344 [2024-11-19 12:01:07.661937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.344 [2024-11-19 12:01:07.661974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.344 [2024-11-19 12:01:07.662081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.344 [2024-11-19 12:01:07.662117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.344 { 00:09:04.344 "results": [ 00:09:04.344 { 00:09:04.344 "job": "raid_bdev1", 
00:09:04.344 "core_mask": "0x1", 00:09:04.344 "workload": "randrw", 00:09:04.344 "percentage": 50, 00:09:04.344 "status": "finished", 00:09:04.344 "queue_depth": 1, 00:09:04.344 "io_size": 131072, 00:09:04.344 "runtime": 1.330281, 00:09:04.344 "iops": 16765.63072012605, 00:09:04.344 "mibps": 2095.703840015756, 00:09:04.344 "io_failed": 1, 00:09:04.344 "io_timeout": 0, 00:09:04.344 "avg_latency_us": 82.91300082073516, 00:09:04.344 "min_latency_us": 17.77467248908297, 00:09:04.344 "max_latency_us": 1387.989519650655 00:09:04.344 } 00:09:04.344 ], 00:09:04.344 "core_count": 1 00:09:04.344 } 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65367 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65367 ']' 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65367 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65367 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65367' 00:09:04.344 killing process with pid 65367 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65367 00:09:04.344 [2024-11-19 12:01:07.707329] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.344 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65367 00:09:04.604 [2024-11-19 
12:01:07.924477] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.988 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.flAcK1Uwpu 00:09:05.988 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:05.988 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:05.988 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:05.988 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:05.988 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:05.988 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:05.988 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:05.988 00:09:05.988 real 0m4.375s 00:09:05.988 user 0m5.116s 00:09:05.988 sys 0m0.569s 00:09:05.988 12:01:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.988 12:01:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.988 ************************************ 00:09:05.988 END TEST raid_read_error_test 00:09:05.988 ************************************ 00:09:05.988 12:01:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:05.988 12:01:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:05.988 12:01:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.988 12:01:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.988 ************************************ 00:09:05.988 START TEST raid_write_error_test 00:09:05.988 ************************************ 00:09:05.988 12:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:05.988 12:01:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:05.988 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:05.988 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:05.988 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:05.988 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.988 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:05.988 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.988 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.988 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:05.989 12:01:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cLnqnmXrnt 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65513 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65513 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65513 ']' 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.989 12:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.989 [2024-11-19 12:01:09.246870] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:09:05.989 [2024-11-19 12:01:09.247133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65513 ] 00:09:06.248 [2024-11-19 12:01:09.424512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.248 [2024-11-19 12:01:09.540963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.507 [2024-11-19 12:01:09.739146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.507 [2024-11-19 12:01:09.739252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.767 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.767 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:06.767 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:06.767 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:06.767 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.767 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.767 BaseBdev1_malloc 00:09:06.767 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.767 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:06.767 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.767 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.767 true 00:09:06.767 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.767 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:06.767 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.767 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.767 [2024-11-19 12:01:10.136757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:06.767 [2024-11-19 12:01:10.136812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.767 [2024-11-19 12:01:10.136830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:06.767 [2024-11-19 12:01:10.136840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.767 [2024-11-19 12:01:10.138896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.767 [2024-11-19 12:01:10.139049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:07.027 BaseBdev1 00:09:07.027 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.027 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:07.027 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:07.027 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.027 12:01:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:07.027 BaseBdev2_malloc 00:09:07.027 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.027 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:07.027 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.027 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.027 true 00:09:07.027 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.027 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:07.027 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.028 [2024-11-19 12:01:10.204420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:07.028 [2024-11-19 12:01:10.204529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.028 [2024-11-19 12:01:10.204553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:07.028 [2024-11-19 12:01:10.204566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.028 [2024-11-19 12:01:10.206890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.028 [2024-11-19 12:01:10.206932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:07.028 BaseBdev2 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:07.028 12:01:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.028 BaseBdev3_malloc 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.028 true 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.028 [2024-11-19 12:01:10.281836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:07.028 [2024-11-19 12:01:10.281889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.028 [2024-11-19 12:01:10.281905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:07.028 [2024-11-19 12:01:10.281915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.028 [2024-11-19 12:01:10.283993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.028 [2024-11-19 12:01:10.284044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:07.028 BaseBdev3 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.028 [2024-11-19 12:01:10.293884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.028 [2024-11-19 12:01:10.295678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.028 [2024-11-19 12:01:10.295755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.028 [2024-11-19 12:01:10.295952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:07.028 [2024-11-19 12:01:10.295966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:07.028 [2024-11-19 12:01:10.296223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:07.028 [2024-11-19 12:01:10.296361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:07.028 [2024-11-19 12:01:10.296384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:07.028 [2024-11-19 12:01:10.296512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.028 "name": "raid_bdev1", 00:09:07.028 "uuid": "b9440ea3-1ba4-4e5a-a6f0-d17a4669a9b4", 00:09:07.028 "strip_size_kb": 64, 00:09:07.028 "state": "online", 00:09:07.028 "raid_level": "raid0", 00:09:07.028 "superblock": true, 00:09:07.028 "num_base_bdevs": 3, 00:09:07.028 "num_base_bdevs_discovered": 3, 00:09:07.028 "num_base_bdevs_operational": 3, 00:09:07.028 "base_bdevs_list": [ 00:09:07.028 { 00:09:07.028 "name": "BaseBdev1", 
00:09:07.028 "uuid": "5156d9b7-df8c-5536-8da6-78c904ffd2b5", 00:09:07.028 "is_configured": true, 00:09:07.028 "data_offset": 2048, 00:09:07.028 "data_size": 63488 00:09:07.028 }, 00:09:07.028 { 00:09:07.028 "name": "BaseBdev2", 00:09:07.028 "uuid": "78481b36-43c3-529d-9757-2df3c4f6388e", 00:09:07.028 "is_configured": true, 00:09:07.028 "data_offset": 2048, 00:09:07.028 "data_size": 63488 00:09:07.028 }, 00:09:07.028 { 00:09:07.028 "name": "BaseBdev3", 00:09:07.028 "uuid": "c2eb9fc1-125b-5377-9307-e7a62e3102fd", 00:09:07.028 "is_configured": true, 00:09:07.028 "data_offset": 2048, 00:09:07.028 "data_size": 63488 00:09:07.028 } 00:09:07.028 ] 00:09:07.028 }' 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.028 12:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.598 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:07.598 12:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:07.598 [2024-11-19 12:01:10.834259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.537 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.537 "name": "raid_bdev1", 00:09:08.537 "uuid": "b9440ea3-1ba4-4e5a-a6f0-d17a4669a9b4", 00:09:08.537 "strip_size_kb": 64, 00:09:08.537 "state": "online", 00:09:08.537 
"raid_level": "raid0", 00:09:08.537 "superblock": true, 00:09:08.537 "num_base_bdevs": 3, 00:09:08.537 "num_base_bdevs_discovered": 3, 00:09:08.538 "num_base_bdevs_operational": 3, 00:09:08.538 "base_bdevs_list": [ 00:09:08.538 { 00:09:08.538 "name": "BaseBdev1", 00:09:08.538 "uuid": "5156d9b7-df8c-5536-8da6-78c904ffd2b5", 00:09:08.538 "is_configured": true, 00:09:08.538 "data_offset": 2048, 00:09:08.538 "data_size": 63488 00:09:08.538 }, 00:09:08.538 { 00:09:08.538 "name": "BaseBdev2", 00:09:08.538 "uuid": "78481b36-43c3-529d-9757-2df3c4f6388e", 00:09:08.538 "is_configured": true, 00:09:08.538 "data_offset": 2048, 00:09:08.538 "data_size": 63488 00:09:08.538 }, 00:09:08.538 { 00:09:08.538 "name": "BaseBdev3", 00:09:08.538 "uuid": "c2eb9fc1-125b-5377-9307-e7a62e3102fd", 00:09:08.538 "is_configured": true, 00:09:08.538 "data_offset": 2048, 00:09:08.538 "data_size": 63488 00:09:08.538 } 00:09:08.538 ] 00:09:08.538 }' 00:09:08.538 12:01:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.538 12:01:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.106 12:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:09.107 12:01:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.107 12:01:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.107 [2024-11-19 12:01:12.209062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.107 [2024-11-19 12:01:12.209165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.107 [2024-11-19 12:01:12.211738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.107 [2024-11-19 12:01:12.211786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.107 [2024-11-19 12:01:12.211823] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.107 [2024-11-19 12:01:12.211833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:09.107 { 00:09:09.107 "results": [ 00:09:09.107 { 00:09:09.107 "job": "raid_bdev1", 00:09:09.107 "core_mask": "0x1", 00:09:09.107 "workload": "randrw", 00:09:09.107 "percentage": 50, 00:09:09.107 "status": "finished", 00:09:09.107 "queue_depth": 1, 00:09:09.107 "io_size": 131072, 00:09:09.107 "runtime": 1.375629, 00:09:09.107 "iops": 16577.870923046838, 00:09:09.107 "mibps": 2072.2338653808547, 00:09:09.107 "io_failed": 1, 00:09:09.107 "io_timeout": 0, 00:09:09.107 "avg_latency_us": 83.84381433369829, 00:09:09.107 "min_latency_us": 20.34585152838428, 00:09:09.107 "max_latency_us": 1345.0620087336245 00:09:09.107 } 00:09:09.107 ], 00:09:09.107 "core_count": 1 00:09:09.107 } 00:09:09.107 12:01:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.107 12:01:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65513 00:09:09.107 12:01:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65513 ']' 00:09:09.107 12:01:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65513 00:09:09.107 12:01:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:09.107 12:01:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.107 12:01:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65513 00:09:09.107 killing process with pid 65513 00:09:09.107 12:01:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.107 12:01:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.107 12:01:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65513' 00:09:09.107 12:01:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65513 00:09:09.107 [2024-11-19 12:01:12.246899] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:09.107 12:01:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65513 00:09:09.107 [2024-11-19 12:01:12.474275] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:10.487 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cLnqnmXrnt 00:09:10.487 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:10.487 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:10.487 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:10.487 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:10.487 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:10.487 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:10.487 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:10.487 ************************************ 00:09:10.487 END TEST raid_write_error_test 00:09:10.487 ************************************ 00:09:10.487 00:09:10.487 real 0m4.494s 00:09:10.487 user 0m5.308s 00:09:10.487 sys 0m0.587s 00:09:10.487 12:01:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.487 12:01:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.487 12:01:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:10.487 12:01:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:10.487 12:01:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:10.487 12:01:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.487 12:01:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:10.487 ************************************ 00:09:10.487 START TEST raid_state_function_test 00:09:10.487 ************************************ 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:10.487 12:01:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65651 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65651' 00:09:10.487 Process raid pid: 65651 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65651 00:09:10.487 12:01:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65651 ']' 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.487 12:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.487 [2024-11-19 12:01:13.792803] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:09:10.487 [2024-11-19 12:01:13.792918] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.748 [2024-11-19 12:01:13.970122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.748 [2024-11-19 12:01:14.085933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.007 [2024-11-19 12:01:14.284538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.007 [2024-11-19 12:01:14.284583] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.266 12:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.266 12:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:11.266 12:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:11.266 12:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.266 12:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.266 [2024-11-19 12:01:14.635878] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:11.266 [2024-11-19 12:01:14.636014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:11.266 [2024-11-19 12:01:14.636030] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:11.266 [2024-11-19 12:01:14.636041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:11.266 [2024-11-19 12:01:14.636047] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:11.266 [2024-11-19 12:01:14.636056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:11.525 12:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.525 12:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:11.525 12:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.525 12:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.525 12:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.526 12:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.526 12:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.526 12:01:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.526 12:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.526 12:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.526 12:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.526 12:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.526 12:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.526 12:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.526 12:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.526 12:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.526 12:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.526 "name": "Existed_Raid", 00:09:11.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.526 "strip_size_kb": 64, 00:09:11.526 "state": "configuring", 00:09:11.526 "raid_level": "concat", 00:09:11.526 "superblock": false, 00:09:11.526 "num_base_bdevs": 3, 00:09:11.526 "num_base_bdevs_discovered": 0, 00:09:11.526 "num_base_bdevs_operational": 3, 00:09:11.526 "base_bdevs_list": [ 00:09:11.526 { 00:09:11.526 "name": "BaseBdev1", 00:09:11.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.526 "is_configured": false, 00:09:11.526 "data_offset": 0, 00:09:11.526 "data_size": 0 00:09:11.526 }, 00:09:11.526 { 00:09:11.526 "name": "BaseBdev2", 00:09:11.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.526 "is_configured": false, 00:09:11.526 "data_offset": 0, 00:09:11.526 "data_size": 0 00:09:11.526 }, 00:09:11.526 { 00:09:11.526 "name": "BaseBdev3", 00:09:11.526 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:11.526 "is_configured": false, 00:09:11.526 "data_offset": 0, 00:09:11.526 "data_size": 0 00:09:11.526 } 00:09:11.526 ] 00:09:11.526 }' 00:09:11.526 12:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.526 12:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.784 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:11.784 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.784 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.784 [2024-11-19 12:01:15.027146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:11.784 [2024-11-19 12:01:15.027227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:11.784 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.784 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:11.784 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.785 [2024-11-19 12:01:15.039169] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:11.785 [2024-11-19 12:01:15.039248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:11.785 [2024-11-19 12:01:15.039276] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:11.785 [2024-11-19 12:01:15.039299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:11.785 [2024-11-19 12:01:15.039317] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:11.785 [2024-11-19 12:01:15.039338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.785 [2024-11-19 12:01:15.085774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.785 BaseBdev1 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.785 [ 00:09:11.785 { 00:09:11.785 "name": "BaseBdev1", 00:09:11.785 "aliases": [ 00:09:11.785 "7ab5a2f2-bd85-4492-9315-0f2863900527" 00:09:11.785 ], 00:09:11.785 "product_name": "Malloc disk", 00:09:11.785 "block_size": 512, 00:09:11.785 "num_blocks": 65536, 00:09:11.785 "uuid": "7ab5a2f2-bd85-4492-9315-0f2863900527", 00:09:11.785 "assigned_rate_limits": { 00:09:11.785 "rw_ios_per_sec": 0, 00:09:11.785 "rw_mbytes_per_sec": 0, 00:09:11.785 "r_mbytes_per_sec": 0, 00:09:11.785 "w_mbytes_per_sec": 0 00:09:11.785 }, 00:09:11.785 "claimed": true, 00:09:11.785 "claim_type": "exclusive_write", 00:09:11.785 "zoned": false, 00:09:11.785 "supported_io_types": { 00:09:11.785 "read": true, 00:09:11.785 "write": true, 00:09:11.785 "unmap": true, 00:09:11.785 "flush": true, 00:09:11.785 "reset": true, 00:09:11.785 "nvme_admin": false, 00:09:11.785 "nvme_io": false, 00:09:11.785 "nvme_io_md": false, 00:09:11.785 "write_zeroes": true, 00:09:11.785 "zcopy": true, 00:09:11.785 "get_zone_info": false, 00:09:11.785 "zone_management": false, 00:09:11.785 "zone_append": false, 00:09:11.785 "compare": false, 00:09:11.785 "compare_and_write": false, 00:09:11.785 "abort": true, 00:09:11.785 "seek_hole": false, 00:09:11.785 "seek_data": false, 00:09:11.785 "copy": true, 00:09:11.785 "nvme_iov_md": false 00:09:11.785 }, 00:09:11.785 "memory_domains": [ 00:09:11.785 { 00:09:11.785 "dma_device_id": "system", 00:09:11.785 "dma_device_type": 1 00:09:11.785 }, 00:09:11.785 { 00:09:11.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:11.785 "dma_device_type": 2 00:09:11.785 } 00:09:11.785 ], 00:09:11.785 "driver_specific": {} 00:09:11.785 } 00:09:11.785 ] 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.785 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.785 12:01:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.044 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.044 "name": "Existed_Raid", 00:09:12.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.044 "strip_size_kb": 64, 00:09:12.044 "state": "configuring", 00:09:12.044 "raid_level": "concat", 00:09:12.044 "superblock": false, 00:09:12.044 "num_base_bdevs": 3, 00:09:12.044 "num_base_bdevs_discovered": 1, 00:09:12.044 "num_base_bdevs_operational": 3, 00:09:12.044 "base_bdevs_list": [ 00:09:12.044 { 00:09:12.044 "name": "BaseBdev1", 00:09:12.044 "uuid": "7ab5a2f2-bd85-4492-9315-0f2863900527", 00:09:12.044 "is_configured": true, 00:09:12.044 "data_offset": 0, 00:09:12.044 "data_size": 65536 00:09:12.044 }, 00:09:12.044 { 00:09:12.044 "name": "BaseBdev2", 00:09:12.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.044 "is_configured": false, 00:09:12.044 "data_offset": 0, 00:09:12.044 "data_size": 0 00:09:12.044 }, 00:09:12.044 { 00:09:12.044 "name": "BaseBdev3", 00:09:12.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.044 "is_configured": false, 00:09:12.044 "data_offset": 0, 00:09:12.044 "data_size": 0 00:09:12.044 } 00:09:12.044 ] 00:09:12.044 }' 00:09:12.044 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.044 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.304 [2024-11-19 12:01:15.584982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.304 [2024-11-19 12:01:15.585056] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.304 [2024-11-19 12:01:15.597038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:12.304 [2024-11-19 12:01:15.598977] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:12.304 [2024-11-19 12:01:15.599098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:12.304 [2024-11-19 12:01:15.599134] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:12.304 [2024-11-19 12:01:15.599161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.304 12:01:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.304 "name": "Existed_Raid", 00:09:12.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.304 "strip_size_kb": 64, 00:09:12.304 "state": "configuring", 00:09:12.304 "raid_level": "concat", 00:09:12.304 "superblock": false, 00:09:12.304 "num_base_bdevs": 3, 00:09:12.304 "num_base_bdevs_discovered": 1, 00:09:12.304 "num_base_bdevs_operational": 3, 00:09:12.304 "base_bdevs_list": [ 00:09:12.304 { 00:09:12.304 "name": "BaseBdev1", 00:09:12.304 "uuid": "7ab5a2f2-bd85-4492-9315-0f2863900527", 00:09:12.304 "is_configured": true, 00:09:12.304 "data_offset": 
0, 00:09:12.304 "data_size": 65536 00:09:12.304 }, 00:09:12.304 { 00:09:12.304 "name": "BaseBdev2", 00:09:12.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.304 "is_configured": false, 00:09:12.304 "data_offset": 0, 00:09:12.304 "data_size": 0 00:09:12.304 }, 00:09:12.304 { 00:09:12.304 "name": "BaseBdev3", 00:09:12.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.304 "is_configured": false, 00:09:12.304 "data_offset": 0, 00:09:12.304 "data_size": 0 00:09:12.304 } 00:09:12.304 ] 00:09:12.304 }' 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.304 12:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.873 [2024-11-19 12:01:16.057117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.873 BaseBdev2 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.873 [ 00:09:12.873 { 00:09:12.873 "name": "BaseBdev2", 00:09:12.873 "aliases": [ 00:09:12.873 "c17bac64-5d42-4f70-bc10-9cf4f24afc07" 00:09:12.873 ], 00:09:12.873 "product_name": "Malloc disk", 00:09:12.873 "block_size": 512, 00:09:12.873 "num_blocks": 65536, 00:09:12.873 "uuid": "c17bac64-5d42-4f70-bc10-9cf4f24afc07", 00:09:12.873 "assigned_rate_limits": { 00:09:12.873 "rw_ios_per_sec": 0, 00:09:12.873 "rw_mbytes_per_sec": 0, 00:09:12.873 "r_mbytes_per_sec": 0, 00:09:12.873 "w_mbytes_per_sec": 0 00:09:12.873 }, 00:09:12.873 "claimed": true, 00:09:12.873 "claim_type": "exclusive_write", 00:09:12.873 "zoned": false, 00:09:12.873 "supported_io_types": { 00:09:12.873 "read": true, 00:09:12.873 "write": true, 00:09:12.873 "unmap": true, 00:09:12.873 "flush": true, 00:09:12.873 "reset": true, 00:09:12.873 "nvme_admin": false, 00:09:12.873 "nvme_io": false, 00:09:12.873 "nvme_io_md": false, 00:09:12.873 "write_zeroes": true, 00:09:12.873 "zcopy": true, 00:09:12.873 "get_zone_info": false, 00:09:12.873 "zone_management": false, 00:09:12.873 "zone_append": false, 00:09:12.873 "compare": false, 00:09:12.873 "compare_and_write": false, 00:09:12.873 "abort": true, 00:09:12.873 "seek_hole": 
false, 00:09:12.873 "seek_data": false, 00:09:12.873 "copy": true, 00:09:12.873 "nvme_iov_md": false 00:09:12.873 }, 00:09:12.873 "memory_domains": [ 00:09:12.873 { 00:09:12.873 "dma_device_id": "system", 00:09:12.873 "dma_device_type": 1 00:09:12.873 }, 00:09:12.873 { 00:09:12.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.873 "dma_device_type": 2 00:09:12.873 } 00:09:12.873 ], 00:09:12.873 "driver_specific": {} 00:09:12.873 } 00:09:12.873 ] 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.873 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.874 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.874 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.874 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.874 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.874 "name": "Existed_Raid", 00:09:12.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.874 "strip_size_kb": 64, 00:09:12.874 "state": "configuring", 00:09:12.874 "raid_level": "concat", 00:09:12.874 "superblock": false, 00:09:12.874 "num_base_bdevs": 3, 00:09:12.874 "num_base_bdevs_discovered": 2, 00:09:12.874 "num_base_bdevs_operational": 3, 00:09:12.874 "base_bdevs_list": [ 00:09:12.874 { 00:09:12.874 "name": "BaseBdev1", 00:09:12.874 "uuid": "7ab5a2f2-bd85-4492-9315-0f2863900527", 00:09:12.874 "is_configured": true, 00:09:12.874 "data_offset": 0, 00:09:12.874 "data_size": 65536 00:09:12.874 }, 00:09:12.874 { 00:09:12.874 "name": "BaseBdev2", 00:09:12.874 "uuid": "c17bac64-5d42-4f70-bc10-9cf4f24afc07", 00:09:12.874 "is_configured": true, 00:09:12.874 "data_offset": 0, 00:09:12.874 "data_size": 65536 00:09:12.874 }, 00:09:12.874 { 00:09:12.874 "name": "BaseBdev3", 00:09:12.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.874 "is_configured": false, 00:09:12.874 "data_offset": 0, 00:09:12.874 "data_size": 0 00:09:12.874 } 00:09:12.874 ] 00:09:12.874 }' 00:09:12.874 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.874 12:01:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:13.133 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:13.133 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.133 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.393 [2024-11-19 12:01:16.545776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:13.393 [2024-11-19 12:01:16.545829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:13.393 [2024-11-19 12:01:16.545841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:13.393 [2024-11-19 12:01:16.546122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:13.393 [2024-11-19 12:01:16.546292] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:13.393 [2024-11-19 12:01:16.546302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:13.393 [2024-11-19 12:01:16.546729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.393 BaseBdev3 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.393 12:01:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.393 [ 00:09:13.393 { 00:09:13.393 "name": "BaseBdev3", 00:09:13.393 "aliases": [ 00:09:13.393 "77592b50-c885-4f99-bb5c-23b6e3202afc" 00:09:13.393 ], 00:09:13.393 "product_name": "Malloc disk", 00:09:13.393 "block_size": 512, 00:09:13.393 "num_blocks": 65536, 00:09:13.393 "uuid": "77592b50-c885-4f99-bb5c-23b6e3202afc", 00:09:13.393 "assigned_rate_limits": { 00:09:13.393 "rw_ios_per_sec": 0, 00:09:13.393 "rw_mbytes_per_sec": 0, 00:09:13.393 "r_mbytes_per_sec": 0, 00:09:13.393 "w_mbytes_per_sec": 0 00:09:13.393 }, 00:09:13.393 "claimed": true, 00:09:13.393 "claim_type": "exclusive_write", 00:09:13.393 "zoned": false, 00:09:13.393 "supported_io_types": { 00:09:13.393 "read": true, 00:09:13.393 "write": true, 00:09:13.393 "unmap": true, 00:09:13.393 "flush": true, 00:09:13.393 "reset": true, 00:09:13.393 "nvme_admin": false, 00:09:13.393 "nvme_io": false, 00:09:13.393 "nvme_io_md": false, 00:09:13.393 "write_zeroes": true, 00:09:13.393 "zcopy": true, 00:09:13.393 "get_zone_info": false, 00:09:13.393 "zone_management": false, 00:09:13.393 "zone_append": false, 00:09:13.393 "compare": false, 
00:09:13.393 "compare_and_write": false, 00:09:13.393 "abort": true, 00:09:13.393 "seek_hole": false, 00:09:13.393 "seek_data": false, 00:09:13.393 "copy": true, 00:09:13.393 "nvme_iov_md": false 00:09:13.393 }, 00:09:13.393 "memory_domains": [ 00:09:13.393 { 00:09:13.393 "dma_device_id": "system", 00:09:13.393 "dma_device_type": 1 00:09:13.393 }, 00:09:13.393 { 00:09:13.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.393 "dma_device_type": 2 00:09:13.393 } 00:09:13.393 ], 00:09:13.393 "driver_specific": {} 00:09:13.393 } 00:09:13.393 ] 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.393 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.393 "name": "Existed_Raid", 00:09:13.393 "uuid": "e03aed48-0bdd-4cf6-9272-1478e8628e07", 00:09:13.393 "strip_size_kb": 64, 00:09:13.393 "state": "online", 00:09:13.393 "raid_level": "concat", 00:09:13.393 "superblock": false, 00:09:13.393 "num_base_bdevs": 3, 00:09:13.394 "num_base_bdevs_discovered": 3, 00:09:13.394 "num_base_bdevs_operational": 3, 00:09:13.394 "base_bdevs_list": [ 00:09:13.394 { 00:09:13.394 "name": "BaseBdev1", 00:09:13.394 "uuid": "7ab5a2f2-bd85-4492-9315-0f2863900527", 00:09:13.394 "is_configured": true, 00:09:13.394 "data_offset": 0, 00:09:13.394 "data_size": 65536 00:09:13.394 }, 00:09:13.394 { 00:09:13.394 "name": "BaseBdev2", 00:09:13.394 "uuid": "c17bac64-5d42-4f70-bc10-9cf4f24afc07", 00:09:13.394 "is_configured": true, 00:09:13.394 "data_offset": 0, 00:09:13.394 "data_size": 65536 00:09:13.394 }, 00:09:13.394 { 00:09:13.394 "name": "BaseBdev3", 00:09:13.394 "uuid": "77592b50-c885-4f99-bb5c-23b6e3202afc", 00:09:13.394 "is_configured": true, 00:09:13.394 "data_offset": 0, 00:09:13.394 "data_size": 65536 00:09:13.394 } 00:09:13.394 ] 00:09:13.394 }' 00:09:13.394 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:13.394 12:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.653 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:13.653 12:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:13.653 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:13.653 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:13.653 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:13.653 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:13.653 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:13.653 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:13.653 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.653 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.653 [2024-11-19 12:01:17.013360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.913 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.913 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:13.913 "name": "Existed_Raid", 00:09:13.913 "aliases": [ 00:09:13.913 "e03aed48-0bdd-4cf6-9272-1478e8628e07" 00:09:13.913 ], 00:09:13.913 "product_name": "Raid Volume", 00:09:13.913 "block_size": 512, 00:09:13.913 "num_blocks": 196608, 00:09:13.913 "uuid": "e03aed48-0bdd-4cf6-9272-1478e8628e07", 00:09:13.913 "assigned_rate_limits": { 00:09:13.913 "rw_ios_per_sec": 0, 00:09:13.913 "rw_mbytes_per_sec": 0, 00:09:13.913 "r_mbytes_per_sec": 
0, 00:09:13.913 "w_mbytes_per_sec": 0 00:09:13.913 }, 00:09:13.913 "claimed": false, 00:09:13.913 "zoned": false, 00:09:13.913 "supported_io_types": { 00:09:13.913 "read": true, 00:09:13.913 "write": true, 00:09:13.913 "unmap": true, 00:09:13.913 "flush": true, 00:09:13.913 "reset": true, 00:09:13.913 "nvme_admin": false, 00:09:13.913 "nvme_io": false, 00:09:13.913 "nvme_io_md": false, 00:09:13.913 "write_zeroes": true, 00:09:13.913 "zcopy": false, 00:09:13.913 "get_zone_info": false, 00:09:13.913 "zone_management": false, 00:09:13.913 "zone_append": false, 00:09:13.913 "compare": false, 00:09:13.913 "compare_and_write": false, 00:09:13.913 "abort": false, 00:09:13.913 "seek_hole": false, 00:09:13.913 "seek_data": false, 00:09:13.913 "copy": false, 00:09:13.913 "nvme_iov_md": false 00:09:13.913 }, 00:09:13.913 "memory_domains": [ 00:09:13.913 { 00:09:13.913 "dma_device_id": "system", 00:09:13.913 "dma_device_type": 1 00:09:13.913 }, 00:09:13.913 { 00:09:13.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.913 "dma_device_type": 2 00:09:13.913 }, 00:09:13.913 { 00:09:13.913 "dma_device_id": "system", 00:09:13.913 "dma_device_type": 1 00:09:13.913 }, 00:09:13.913 { 00:09:13.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.913 "dma_device_type": 2 00:09:13.913 }, 00:09:13.913 { 00:09:13.913 "dma_device_id": "system", 00:09:13.913 "dma_device_type": 1 00:09:13.913 }, 00:09:13.913 { 00:09:13.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.913 "dma_device_type": 2 00:09:13.913 } 00:09:13.913 ], 00:09:13.913 "driver_specific": { 00:09:13.913 "raid": { 00:09:13.913 "uuid": "e03aed48-0bdd-4cf6-9272-1478e8628e07", 00:09:13.913 "strip_size_kb": 64, 00:09:13.913 "state": "online", 00:09:13.913 "raid_level": "concat", 00:09:13.913 "superblock": false, 00:09:13.913 "num_base_bdevs": 3, 00:09:13.913 "num_base_bdevs_discovered": 3, 00:09:13.913 "num_base_bdevs_operational": 3, 00:09:13.913 "base_bdevs_list": [ 00:09:13.913 { 00:09:13.913 "name": "BaseBdev1", 
00:09:13.913 "uuid": "7ab5a2f2-bd85-4492-9315-0f2863900527", 00:09:13.913 "is_configured": true, 00:09:13.913 "data_offset": 0, 00:09:13.913 "data_size": 65536 00:09:13.913 }, 00:09:13.913 { 00:09:13.913 "name": "BaseBdev2", 00:09:13.913 "uuid": "c17bac64-5d42-4f70-bc10-9cf4f24afc07", 00:09:13.913 "is_configured": true, 00:09:13.913 "data_offset": 0, 00:09:13.913 "data_size": 65536 00:09:13.913 }, 00:09:13.913 { 00:09:13.913 "name": "BaseBdev3", 00:09:13.913 "uuid": "77592b50-c885-4f99-bb5c-23b6e3202afc", 00:09:13.913 "is_configured": true, 00:09:13.913 "data_offset": 0, 00:09:13.913 "data_size": 65536 00:09:13.913 } 00:09:13.913 ] 00:09:13.913 } 00:09:13.913 } 00:09:13.913 }' 00:09:13.913 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.913 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:13.913 BaseBdev2 00:09:13.913 BaseBdev3' 00:09:13.913 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.913 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.913 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.913 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:13.913 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.913 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.913 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.913 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:13.913 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.913 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.914 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.914 [2024-11-19 12:01:17.268664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:13.914 [2024-11-19 12:01:17.268697] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.914 [2024-11-19 12:01:17.268749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.179 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.179 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.180 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.180 "name": "Existed_Raid", 00:09:14.180 "uuid": "e03aed48-0bdd-4cf6-9272-1478e8628e07", 00:09:14.181 "strip_size_kb": 64, 00:09:14.181 "state": "offline", 00:09:14.181 "raid_level": "concat", 00:09:14.181 "superblock": false, 00:09:14.181 "num_base_bdevs": 3, 00:09:14.181 "num_base_bdevs_discovered": 2, 00:09:14.181 "num_base_bdevs_operational": 2, 00:09:14.181 "base_bdevs_list": [ 00:09:14.181 { 00:09:14.181 "name": null, 00:09:14.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.181 "is_configured": false, 00:09:14.181 "data_offset": 0, 00:09:14.181 "data_size": 65536 00:09:14.181 }, 00:09:14.181 { 00:09:14.181 "name": "BaseBdev2", 00:09:14.181 "uuid": 
"c17bac64-5d42-4f70-bc10-9cf4f24afc07", 00:09:14.181 "is_configured": true, 00:09:14.181 "data_offset": 0, 00:09:14.181 "data_size": 65536 00:09:14.181 }, 00:09:14.181 { 00:09:14.181 "name": "BaseBdev3", 00:09:14.181 "uuid": "77592b50-c885-4f99-bb5c-23b6e3202afc", 00:09:14.181 "is_configured": true, 00:09:14.181 "data_offset": 0, 00:09:14.181 "data_size": 65536 00:09:14.181 } 00:09:14.181 ] 00:09:14.181 }' 00:09:14.181 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.181 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.448 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:14.448 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:14.448 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.448 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:14.448 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.448 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.709 [2024-11-19 12:01:17.857224] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.709 12:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.710 [2024-11-19 12:01:17.994414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:14.710 [2024-11-19 12:01:17.994467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:14.970 12:01:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.970 BaseBdev2 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.970 
12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.970 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.970 [ 00:09:14.970 { 00:09:14.970 "name": "BaseBdev2", 00:09:14.970 "aliases": [ 00:09:14.970 "f5291363-f8e9-4cbe-9cc7-218e80f64258" 00:09:14.970 ], 00:09:14.970 "product_name": "Malloc disk", 00:09:14.970 "block_size": 512, 00:09:14.970 "num_blocks": 65536, 00:09:14.970 "uuid": "f5291363-f8e9-4cbe-9cc7-218e80f64258", 00:09:14.970 "assigned_rate_limits": { 00:09:14.970 "rw_ios_per_sec": 0, 00:09:14.971 "rw_mbytes_per_sec": 0, 00:09:14.971 "r_mbytes_per_sec": 0, 00:09:14.971 "w_mbytes_per_sec": 0 00:09:14.971 }, 00:09:14.971 "claimed": false, 00:09:14.971 "zoned": false, 00:09:14.971 "supported_io_types": { 00:09:14.971 "read": true, 00:09:14.971 "write": true, 00:09:14.971 "unmap": true, 00:09:14.971 "flush": true, 00:09:14.971 "reset": true, 00:09:14.971 "nvme_admin": false, 00:09:14.971 "nvme_io": false, 00:09:14.971 "nvme_io_md": false, 00:09:14.971 "write_zeroes": true, 
00:09:14.971 "zcopy": true, 00:09:14.971 "get_zone_info": false, 00:09:14.971 "zone_management": false, 00:09:14.971 "zone_append": false, 00:09:14.971 "compare": false, 00:09:14.971 "compare_and_write": false, 00:09:14.971 "abort": true, 00:09:14.971 "seek_hole": false, 00:09:14.971 "seek_data": false, 00:09:14.971 "copy": true, 00:09:14.971 "nvme_iov_md": false 00:09:14.971 }, 00:09:14.971 "memory_domains": [ 00:09:14.971 { 00:09:14.971 "dma_device_id": "system", 00:09:14.971 "dma_device_type": 1 00:09:14.971 }, 00:09:14.971 { 00:09:14.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.971 "dma_device_type": 2 00:09:14.971 } 00:09:14.971 ], 00:09:14.971 "driver_specific": {} 00:09:14.971 } 00:09:14.971 ] 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.971 BaseBdev3 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.971 12:01:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.971 [ 00:09:14.971 { 00:09:14.971 "name": "BaseBdev3", 00:09:14.971 "aliases": [ 00:09:14.971 "c2ca9a58-e209-45e3-aa87-8ada1bd8c3a8" 00:09:14.971 ], 00:09:14.971 "product_name": "Malloc disk", 00:09:14.971 "block_size": 512, 00:09:14.971 "num_blocks": 65536, 00:09:14.971 "uuid": "c2ca9a58-e209-45e3-aa87-8ada1bd8c3a8", 00:09:14.971 "assigned_rate_limits": { 00:09:14.971 "rw_ios_per_sec": 0, 00:09:14.971 "rw_mbytes_per_sec": 0, 00:09:14.971 "r_mbytes_per_sec": 0, 00:09:14.971 "w_mbytes_per_sec": 0 00:09:14.971 }, 00:09:14.971 "claimed": false, 00:09:14.971 "zoned": false, 00:09:14.971 "supported_io_types": { 00:09:14.971 "read": true, 00:09:14.971 "write": true, 00:09:14.971 "unmap": true, 00:09:14.971 "flush": true, 00:09:14.971 "reset": true, 00:09:14.971 "nvme_admin": false, 00:09:14.971 "nvme_io": false, 00:09:14.971 "nvme_io_md": false, 00:09:14.971 "write_zeroes": true, 
00:09:14.971 "zcopy": true, 00:09:14.971 "get_zone_info": false, 00:09:14.971 "zone_management": false, 00:09:14.971 "zone_append": false, 00:09:14.971 "compare": false, 00:09:14.971 "compare_and_write": false, 00:09:14.971 "abort": true, 00:09:14.971 "seek_hole": false, 00:09:14.971 "seek_data": false, 00:09:14.971 "copy": true, 00:09:14.971 "nvme_iov_md": false 00:09:14.971 }, 00:09:14.971 "memory_domains": [ 00:09:14.971 { 00:09:14.971 "dma_device_id": "system", 00:09:14.971 "dma_device_type": 1 00:09:14.971 }, 00:09:14.971 { 00:09:14.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.971 "dma_device_type": 2 00:09:14.971 } 00:09:14.971 ], 00:09:14.971 "driver_specific": {} 00:09:14.971 } 00:09:14.971 ] 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.971 [2024-11-19 12:01:18.298454] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.971 [2024-11-19 12:01:18.298507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.971 [2024-11-19 12:01:18.298533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.971 [2024-11-19 12:01:18.300547] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.971 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.231 12:01:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.231 "name": "Existed_Raid", 00:09:15.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.231 "strip_size_kb": 64, 00:09:15.231 "state": "configuring", 00:09:15.231 "raid_level": "concat", 00:09:15.231 "superblock": false, 00:09:15.231 "num_base_bdevs": 3, 00:09:15.231 "num_base_bdevs_discovered": 2, 00:09:15.231 "num_base_bdevs_operational": 3, 00:09:15.231 "base_bdevs_list": [ 00:09:15.231 { 00:09:15.231 "name": "BaseBdev1", 00:09:15.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.231 "is_configured": false, 00:09:15.231 "data_offset": 0, 00:09:15.231 "data_size": 0 00:09:15.231 }, 00:09:15.231 { 00:09:15.231 "name": "BaseBdev2", 00:09:15.231 "uuid": "f5291363-f8e9-4cbe-9cc7-218e80f64258", 00:09:15.231 "is_configured": true, 00:09:15.231 "data_offset": 0, 00:09:15.231 "data_size": 65536 00:09:15.231 }, 00:09:15.231 { 00:09:15.231 "name": "BaseBdev3", 00:09:15.231 "uuid": "c2ca9a58-e209-45e3-aa87-8ada1bd8c3a8", 00:09:15.231 "is_configured": true, 00:09:15.231 "data_offset": 0, 00:09:15.231 "data_size": 65536 00:09:15.231 } 00:09:15.231 ] 00:09:15.231 }' 00:09:15.231 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.231 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.492 [2024-11-19 12:01:18.745677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.492 "name": "Existed_Raid", 00:09:15.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.492 "strip_size_kb": 64, 00:09:15.492 "state": "configuring", 00:09:15.492 "raid_level": "concat", 00:09:15.492 "superblock": false, 
00:09:15.492 "num_base_bdevs": 3, 00:09:15.492 "num_base_bdevs_discovered": 1, 00:09:15.492 "num_base_bdevs_operational": 3, 00:09:15.492 "base_bdevs_list": [ 00:09:15.492 { 00:09:15.492 "name": "BaseBdev1", 00:09:15.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.492 "is_configured": false, 00:09:15.492 "data_offset": 0, 00:09:15.492 "data_size": 0 00:09:15.492 }, 00:09:15.492 { 00:09:15.492 "name": null, 00:09:15.492 "uuid": "f5291363-f8e9-4cbe-9cc7-218e80f64258", 00:09:15.492 "is_configured": false, 00:09:15.492 "data_offset": 0, 00:09:15.492 "data_size": 65536 00:09:15.492 }, 00:09:15.492 { 00:09:15.492 "name": "BaseBdev3", 00:09:15.492 "uuid": "c2ca9a58-e209-45e3-aa87-8ada1bd8c3a8", 00:09:15.492 "is_configured": true, 00:09:15.492 "data_offset": 0, 00:09:15.492 "data_size": 65536 00:09:15.492 } 00:09:15.492 ] 00:09:15.492 }' 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.492 12:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.072 
12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.072 [2024-11-19 12:01:19.259362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.072 BaseBdev1 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.072 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 [ 00:09:16.073 { 00:09:16.073 "name": "BaseBdev1", 00:09:16.073 "aliases": [ 00:09:16.073 "bdd31ea3-a4d4-48b3-ac46-1933c1a3a3af" 00:09:16.073 ], 00:09:16.073 "product_name": 
"Malloc disk", 00:09:16.073 "block_size": 512, 00:09:16.073 "num_blocks": 65536, 00:09:16.073 "uuid": "bdd31ea3-a4d4-48b3-ac46-1933c1a3a3af", 00:09:16.073 "assigned_rate_limits": { 00:09:16.073 "rw_ios_per_sec": 0, 00:09:16.073 "rw_mbytes_per_sec": 0, 00:09:16.073 "r_mbytes_per_sec": 0, 00:09:16.073 "w_mbytes_per_sec": 0 00:09:16.073 }, 00:09:16.073 "claimed": true, 00:09:16.073 "claim_type": "exclusive_write", 00:09:16.073 "zoned": false, 00:09:16.073 "supported_io_types": { 00:09:16.073 "read": true, 00:09:16.073 "write": true, 00:09:16.073 "unmap": true, 00:09:16.073 "flush": true, 00:09:16.073 "reset": true, 00:09:16.073 "nvme_admin": false, 00:09:16.073 "nvme_io": false, 00:09:16.073 "nvme_io_md": false, 00:09:16.073 "write_zeroes": true, 00:09:16.073 "zcopy": true, 00:09:16.073 "get_zone_info": false, 00:09:16.073 "zone_management": false, 00:09:16.073 "zone_append": false, 00:09:16.073 "compare": false, 00:09:16.073 "compare_and_write": false, 00:09:16.073 "abort": true, 00:09:16.073 "seek_hole": false, 00:09:16.073 "seek_data": false, 00:09:16.073 "copy": true, 00:09:16.073 "nvme_iov_md": false 00:09:16.073 }, 00:09:16.073 "memory_domains": [ 00:09:16.073 { 00:09:16.073 "dma_device_id": "system", 00:09:16.073 "dma_device_type": 1 00:09:16.073 }, 00:09:16.073 { 00:09:16.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.073 "dma_device_type": 2 00:09:16.073 } 00:09:16.073 ], 00:09:16.073 "driver_specific": {} 00:09:16.073 } 00:09:16.073 ] 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.073 12:01:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.073 "name": "Existed_Raid", 00:09:16.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.073 "strip_size_kb": 64, 00:09:16.073 "state": "configuring", 00:09:16.073 "raid_level": "concat", 00:09:16.073 "superblock": false, 00:09:16.073 "num_base_bdevs": 3, 00:09:16.073 "num_base_bdevs_discovered": 2, 00:09:16.073 "num_base_bdevs_operational": 3, 00:09:16.073 "base_bdevs_list": [ 00:09:16.073 { 00:09:16.073 "name": "BaseBdev1", 
00:09:16.073 "uuid": "bdd31ea3-a4d4-48b3-ac46-1933c1a3a3af", 00:09:16.073 "is_configured": true, 00:09:16.073 "data_offset": 0, 00:09:16.073 "data_size": 65536 00:09:16.073 }, 00:09:16.073 { 00:09:16.073 "name": null, 00:09:16.073 "uuid": "f5291363-f8e9-4cbe-9cc7-218e80f64258", 00:09:16.073 "is_configured": false, 00:09:16.073 "data_offset": 0, 00:09:16.073 "data_size": 65536 00:09:16.073 }, 00:09:16.073 { 00:09:16.073 "name": "BaseBdev3", 00:09:16.073 "uuid": "c2ca9a58-e209-45e3-aa87-8ada1bd8c3a8", 00:09:16.073 "is_configured": true, 00:09:16.073 "data_offset": 0, 00:09:16.073 "data_size": 65536 00:09:16.073 } 00:09:16.073 ] 00:09:16.073 }' 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.073 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.643 [2024-11-19 12:01:19.794560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:16.643 
12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.643 "name": "Existed_Raid", 00:09:16.643 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:16.643 "strip_size_kb": 64, 00:09:16.643 "state": "configuring", 00:09:16.643 "raid_level": "concat", 00:09:16.643 "superblock": false, 00:09:16.643 "num_base_bdevs": 3, 00:09:16.643 "num_base_bdevs_discovered": 1, 00:09:16.643 "num_base_bdevs_operational": 3, 00:09:16.643 "base_bdevs_list": [ 00:09:16.643 { 00:09:16.643 "name": "BaseBdev1", 00:09:16.643 "uuid": "bdd31ea3-a4d4-48b3-ac46-1933c1a3a3af", 00:09:16.643 "is_configured": true, 00:09:16.643 "data_offset": 0, 00:09:16.643 "data_size": 65536 00:09:16.643 }, 00:09:16.643 { 00:09:16.643 "name": null, 00:09:16.643 "uuid": "f5291363-f8e9-4cbe-9cc7-218e80f64258", 00:09:16.643 "is_configured": false, 00:09:16.643 "data_offset": 0, 00:09:16.643 "data_size": 65536 00:09:16.643 }, 00:09:16.643 { 00:09:16.643 "name": null, 00:09:16.643 "uuid": "c2ca9a58-e209-45e3-aa87-8ada1bd8c3a8", 00:09:16.643 "is_configured": false, 00:09:16.643 "data_offset": 0, 00:09:16.643 "data_size": 65536 00:09:16.643 } 00:09:16.643 ] 00:09:16.643 }' 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.643 12:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.903 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.903 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.903 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.903 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:16.903 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.903 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:16.903 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:16.903 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.903 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.165 [2024-11-19 12:01:20.281817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.165 "name": "Existed_Raid", 00:09:17.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.165 "strip_size_kb": 64, 00:09:17.165 "state": "configuring", 00:09:17.165 "raid_level": "concat", 00:09:17.165 "superblock": false, 00:09:17.165 "num_base_bdevs": 3, 00:09:17.165 "num_base_bdevs_discovered": 2, 00:09:17.165 "num_base_bdevs_operational": 3, 00:09:17.165 "base_bdevs_list": [ 00:09:17.165 { 00:09:17.165 "name": "BaseBdev1", 00:09:17.165 "uuid": "bdd31ea3-a4d4-48b3-ac46-1933c1a3a3af", 00:09:17.165 "is_configured": true, 00:09:17.165 "data_offset": 0, 00:09:17.165 "data_size": 65536 00:09:17.165 }, 00:09:17.165 { 00:09:17.165 "name": null, 00:09:17.165 "uuid": "f5291363-f8e9-4cbe-9cc7-218e80f64258", 00:09:17.165 "is_configured": false, 00:09:17.165 "data_offset": 0, 00:09:17.165 "data_size": 65536 00:09:17.165 }, 00:09:17.165 { 00:09:17.165 "name": "BaseBdev3", 00:09:17.165 "uuid": "c2ca9a58-e209-45e3-aa87-8ada1bd8c3a8", 00:09:17.165 "is_configured": true, 00:09:17.165 "data_offset": 0, 00:09:17.165 "data_size": 65536 00:09:17.165 } 00:09:17.165 ] 00:09:17.165 }' 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.165 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.424 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:17.424 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.424 12:01:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.424 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.424 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.424 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:17.424 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:17.424 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.424 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.424 [2024-11-19 12:01:20.756974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.684 
12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.684 "name": "Existed_Raid", 00:09:17.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.684 "strip_size_kb": 64, 00:09:17.684 "state": "configuring", 00:09:17.684 "raid_level": "concat", 00:09:17.684 "superblock": false, 00:09:17.684 "num_base_bdevs": 3, 00:09:17.684 "num_base_bdevs_discovered": 1, 00:09:17.684 "num_base_bdevs_operational": 3, 00:09:17.684 "base_bdevs_list": [ 00:09:17.684 { 00:09:17.684 "name": null, 00:09:17.684 "uuid": "bdd31ea3-a4d4-48b3-ac46-1933c1a3a3af", 00:09:17.684 "is_configured": false, 00:09:17.684 "data_offset": 0, 00:09:17.684 "data_size": 65536 00:09:17.684 }, 00:09:17.684 { 00:09:17.684 "name": null, 00:09:17.684 "uuid": "f5291363-f8e9-4cbe-9cc7-218e80f64258", 00:09:17.684 "is_configured": false, 00:09:17.684 "data_offset": 0, 00:09:17.684 "data_size": 65536 00:09:17.684 }, 00:09:17.684 { 00:09:17.684 "name": "BaseBdev3", 00:09:17.684 "uuid": "c2ca9a58-e209-45e3-aa87-8ada1bd8c3a8", 00:09:17.684 "is_configured": true, 00:09:17.684 "data_offset": 0, 00:09:17.684 "data_size": 65536 00:09:17.684 } 00:09:17.684 ] 00:09:17.684 }' 00:09:17.684 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.684 12:01:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.944 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.944 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:17.944 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.944 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:17.944 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:17.944 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.944 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 [2024-11-19 12:01:21.311212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.944 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.944 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.944 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.944 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.944 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.204 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.204 12:01:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.204 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.204 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.204 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.204 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.204 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.204 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.204 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.204 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.204 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.204 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.204 "name": "Existed_Raid", 00:09:18.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.204 "strip_size_kb": 64, 00:09:18.204 "state": "configuring", 00:09:18.204 "raid_level": "concat", 00:09:18.204 "superblock": false, 00:09:18.204 "num_base_bdevs": 3, 00:09:18.204 "num_base_bdevs_discovered": 2, 00:09:18.204 "num_base_bdevs_operational": 3, 00:09:18.204 "base_bdevs_list": [ 00:09:18.204 { 00:09:18.204 "name": null, 00:09:18.204 "uuid": "bdd31ea3-a4d4-48b3-ac46-1933c1a3a3af", 00:09:18.204 "is_configured": false, 00:09:18.204 "data_offset": 0, 00:09:18.204 "data_size": 65536 00:09:18.204 }, 00:09:18.204 { 00:09:18.204 "name": "BaseBdev2", 00:09:18.204 "uuid": "f5291363-f8e9-4cbe-9cc7-218e80f64258", 00:09:18.205 "is_configured": true, 00:09:18.205 "data_offset": 
0, 00:09:18.205 "data_size": 65536 00:09:18.205 }, 00:09:18.205 { 00:09:18.205 "name": "BaseBdev3", 00:09:18.205 "uuid": "c2ca9a58-e209-45e3-aa87-8ada1bd8c3a8", 00:09:18.205 "is_configured": true, 00:09:18.205 "data_offset": 0, 00:09:18.205 "data_size": 65536 00:09:18.205 } 00:09:18.205 ] 00:09:18.205 }' 00:09:18.205 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.205 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.464 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:18.464 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.464 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.464 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.464 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.464 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:18.464 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.464 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.464 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.464 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:18.464 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bdd31ea3-a4d4-48b3-ac46-1933c1a3a3af 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.724 [2024-11-19 12:01:21.895387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:18.724 [2024-11-19 12:01:21.895484] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:18.724 [2024-11-19 12:01:21.895499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:18.724 [2024-11-19 12:01:21.895750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:18.724 [2024-11-19 12:01:21.895906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:18.724 [2024-11-19 12:01:21.895915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:18.724 [2024-11-19 12:01:21.896180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.724 NewBaseBdev 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.724 
12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.724 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.724 [ 00:09:18.724 { 00:09:18.724 "name": "NewBaseBdev", 00:09:18.724 "aliases": [ 00:09:18.724 "bdd31ea3-a4d4-48b3-ac46-1933c1a3a3af" 00:09:18.724 ], 00:09:18.724 "product_name": "Malloc disk", 00:09:18.724 "block_size": 512, 00:09:18.724 "num_blocks": 65536, 00:09:18.724 "uuid": "bdd31ea3-a4d4-48b3-ac46-1933c1a3a3af", 00:09:18.724 "assigned_rate_limits": { 00:09:18.724 "rw_ios_per_sec": 0, 00:09:18.724 "rw_mbytes_per_sec": 0, 00:09:18.724 "r_mbytes_per_sec": 0, 00:09:18.724 "w_mbytes_per_sec": 0 00:09:18.724 }, 00:09:18.724 "claimed": true, 00:09:18.724 "claim_type": "exclusive_write", 00:09:18.724 "zoned": false, 00:09:18.724 "supported_io_types": { 00:09:18.724 "read": true, 00:09:18.724 "write": true, 00:09:18.724 "unmap": true, 00:09:18.724 "flush": true, 00:09:18.724 "reset": true, 00:09:18.724 "nvme_admin": false, 00:09:18.724 "nvme_io": false, 00:09:18.724 "nvme_io_md": false, 00:09:18.724 "write_zeroes": true, 00:09:18.724 "zcopy": true, 00:09:18.725 "get_zone_info": false, 00:09:18.725 "zone_management": false, 00:09:18.725 "zone_append": false, 00:09:18.725 "compare": false, 00:09:18.725 "compare_and_write": false, 00:09:18.725 "abort": true, 00:09:18.725 "seek_hole": false, 00:09:18.725 "seek_data": false, 00:09:18.725 "copy": true, 00:09:18.725 "nvme_iov_md": false 00:09:18.725 }, 00:09:18.725 
"memory_domains": [ 00:09:18.725 { 00:09:18.725 "dma_device_id": "system", 00:09:18.725 "dma_device_type": 1 00:09:18.725 }, 00:09:18.725 { 00:09:18.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.725 "dma_device_type": 2 00:09:18.725 } 00:09:18.725 ], 00:09:18.725 "driver_specific": {} 00:09:18.725 } 00:09:18.725 ] 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.725 "name": "Existed_Raid", 00:09:18.725 "uuid": "9e8b2a9e-3fa1-453d-8501-4cf9d3e6e3b5", 00:09:18.725 "strip_size_kb": 64, 00:09:18.725 "state": "online", 00:09:18.725 "raid_level": "concat", 00:09:18.725 "superblock": false, 00:09:18.725 "num_base_bdevs": 3, 00:09:18.725 "num_base_bdevs_discovered": 3, 00:09:18.725 "num_base_bdevs_operational": 3, 00:09:18.725 "base_bdevs_list": [ 00:09:18.725 { 00:09:18.725 "name": "NewBaseBdev", 00:09:18.725 "uuid": "bdd31ea3-a4d4-48b3-ac46-1933c1a3a3af", 00:09:18.725 "is_configured": true, 00:09:18.725 "data_offset": 0, 00:09:18.725 "data_size": 65536 00:09:18.725 }, 00:09:18.725 { 00:09:18.725 "name": "BaseBdev2", 00:09:18.725 "uuid": "f5291363-f8e9-4cbe-9cc7-218e80f64258", 00:09:18.725 "is_configured": true, 00:09:18.725 "data_offset": 0, 00:09:18.725 "data_size": 65536 00:09:18.725 }, 00:09:18.725 { 00:09:18.725 "name": "BaseBdev3", 00:09:18.725 "uuid": "c2ca9a58-e209-45e3-aa87-8ada1bd8c3a8", 00:09:18.725 "is_configured": true, 00:09:18.725 "data_offset": 0, 00:09:18.725 "data_size": 65536 00:09:18.725 } 00:09:18.725 ] 00:09:18.725 }' 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.725 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.985 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:18.985 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:18.985 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:18.985 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.985 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.985 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.985 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.985 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:18.985 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.985 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.985 [2024-11-19 12:01:22.299093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.985 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.985 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.985 "name": "Existed_Raid", 00:09:18.985 "aliases": [ 00:09:18.985 "9e8b2a9e-3fa1-453d-8501-4cf9d3e6e3b5" 00:09:18.985 ], 00:09:18.985 "product_name": "Raid Volume", 00:09:18.985 "block_size": 512, 00:09:18.985 "num_blocks": 196608, 00:09:18.985 "uuid": "9e8b2a9e-3fa1-453d-8501-4cf9d3e6e3b5", 00:09:18.985 "assigned_rate_limits": { 00:09:18.985 "rw_ios_per_sec": 0, 00:09:18.985 "rw_mbytes_per_sec": 0, 00:09:18.985 "r_mbytes_per_sec": 0, 00:09:18.985 "w_mbytes_per_sec": 0 00:09:18.985 }, 00:09:18.985 "claimed": false, 00:09:18.985 "zoned": false, 00:09:18.985 "supported_io_types": { 00:09:18.985 "read": true, 00:09:18.985 "write": true, 00:09:18.985 "unmap": true, 00:09:18.985 "flush": true, 00:09:18.985 "reset": true, 00:09:18.985 "nvme_admin": false, 00:09:18.985 "nvme_io": false, 00:09:18.985 "nvme_io_md": false, 00:09:18.985 "write_zeroes": true, 
00:09:18.985 "zcopy": false, 00:09:18.985 "get_zone_info": false, 00:09:18.985 "zone_management": false, 00:09:18.985 "zone_append": false, 00:09:18.985 "compare": false, 00:09:18.985 "compare_and_write": false, 00:09:18.985 "abort": false, 00:09:18.985 "seek_hole": false, 00:09:18.985 "seek_data": false, 00:09:18.985 "copy": false, 00:09:18.985 "nvme_iov_md": false 00:09:18.985 }, 00:09:18.985 "memory_domains": [ 00:09:18.985 { 00:09:18.985 "dma_device_id": "system", 00:09:18.985 "dma_device_type": 1 00:09:18.985 }, 00:09:18.985 { 00:09:18.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.985 "dma_device_type": 2 00:09:18.985 }, 00:09:18.985 { 00:09:18.985 "dma_device_id": "system", 00:09:18.985 "dma_device_type": 1 00:09:18.985 }, 00:09:18.985 { 00:09:18.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.985 "dma_device_type": 2 00:09:18.985 }, 00:09:18.985 { 00:09:18.985 "dma_device_id": "system", 00:09:18.985 "dma_device_type": 1 00:09:18.985 }, 00:09:18.985 { 00:09:18.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.985 "dma_device_type": 2 00:09:18.985 } 00:09:18.985 ], 00:09:18.985 "driver_specific": { 00:09:18.985 "raid": { 00:09:18.985 "uuid": "9e8b2a9e-3fa1-453d-8501-4cf9d3e6e3b5", 00:09:18.985 "strip_size_kb": 64, 00:09:18.985 "state": "online", 00:09:18.985 "raid_level": "concat", 00:09:18.985 "superblock": false, 00:09:18.985 "num_base_bdevs": 3, 00:09:18.985 "num_base_bdevs_discovered": 3, 00:09:18.985 "num_base_bdevs_operational": 3, 00:09:18.985 "base_bdevs_list": [ 00:09:18.985 { 00:09:18.985 "name": "NewBaseBdev", 00:09:18.985 "uuid": "bdd31ea3-a4d4-48b3-ac46-1933c1a3a3af", 00:09:18.985 "is_configured": true, 00:09:18.985 "data_offset": 0, 00:09:18.985 "data_size": 65536 00:09:18.985 }, 00:09:18.985 { 00:09:18.985 "name": "BaseBdev2", 00:09:18.985 "uuid": "f5291363-f8e9-4cbe-9cc7-218e80f64258", 00:09:18.985 "is_configured": true, 00:09:18.985 "data_offset": 0, 00:09:18.985 "data_size": 65536 00:09:18.985 }, 00:09:18.985 { 
00:09:18.985 "name": "BaseBdev3", 00:09:18.985 "uuid": "c2ca9a58-e209-45e3-aa87-8ada1bd8c3a8", 00:09:18.985 "is_configured": true, 00:09:18.985 "data_offset": 0, 00:09:18.985 "data_size": 65536 00:09:18.985 } 00:09:18.985 ] 00:09:18.985 } 00:09:18.985 } 00:09:18.985 }' 00:09:18.985 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:19.245 BaseBdev2 00:09:19.245 BaseBdev3' 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.245 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.246 12:01:22 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:19.246 [2024-11-19 12:01:22.578296] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.246 [2024-11-19 12:01:22.578331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.246 [2024-11-19 12:01:22.578422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.246 [2024-11-19 12:01:22.578483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.246 [2024-11-19 12:01:22.578498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:19.246 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.246 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65651 00:09:19.246 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65651 ']' 00:09:19.246 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65651 00:09:19.246 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:19.246 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.246 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65651 00:09:19.506 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.506 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.506 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65651' 00:09:19.506 killing process with pid 65651 00:09:19.506 12:01:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65651 00:09:19.506 [2024-11-19 12:01:22.628287] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.506 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65651 00:09:19.765 [2024-11-19 12:01:22.932747] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.701 12:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:20.701 00:09:20.701 real 0m10.353s 00:09:20.701 user 0m16.376s 00:09:20.701 sys 0m1.790s 00:09:20.701 ************************************ 00:09:20.701 END TEST raid_state_function_test 00:09:20.701 ************************************ 00:09:20.701 12:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.701 12:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.961 12:01:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:20.961 12:01:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:20.961 12:01:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.961 12:01:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.961 ************************************ 00:09:20.961 START TEST raid_state_function_test_sb 00:09:20.961 ************************************ 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66272 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66272' 00:09:20.961 Process raid pid: 66272 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66272 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66272 ']' 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.961 12:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.962 12:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:20.962 12:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.962 12:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.962 [2024-11-19 12:01:24.217919] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:09:20.962 [2024-11-19 12:01:24.218056] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.220 [2024-11-19 12:01:24.394142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.221 [2024-11-19 12:01:24.516780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.479 [2024-11-19 12:01:24.719035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.479 [2024-11-19 12:01:24.719160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.740 [2024-11-19 12:01:25.048004] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.740 [2024-11-19 12:01:25.048081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.740 [2024-11-19 
12:01:25.048092] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.740 [2024-11-19 12:01:25.048102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.740 [2024-11-19 12:01:25.048109] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:21.740 [2024-11-19 12:01:25.048117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.740 "name": "Existed_Raid", 00:09:21.740 "uuid": "d11d58d7-d4ca-49bf-82af-21d92a284d33", 00:09:21.740 "strip_size_kb": 64, 00:09:21.740 "state": "configuring", 00:09:21.740 "raid_level": "concat", 00:09:21.740 "superblock": true, 00:09:21.740 "num_base_bdevs": 3, 00:09:21.740 "num_base_bdevs_discovered": 0, 00:09:21.740 "num_base_bdevs_operational": 3, 00:09:21.740 "base_bdevs_list": [ 00:09:21.740 { 00:09:21.740 "name": "BaseBdev1", 00:09:21.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.740 "is_configured": false, 00:09:21.740 "data_offset": 0, 00:09:21.740 "data_size": 0 00:09:21.740 }, 00:09:21.740 { 00:09:21.740 "name": "BaseBdev2", 00:09:21.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.740 "is_configured": false, 00:09:21.740 "data_offset": 0, 00:09:21.740 "data_size": 0 00:09:21.740 }, 00:09:21.740 { 00:09:21.740 "name": "BaseBdev3", 00:09:21.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.740 "is_configured": false, 00:09:21.740 "data_offset": 0, 00:09:21.740 "data_size": 0 00:09:21.740 } 00:09:21.740 ] 00:09:21.740 }' 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.740 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.310 [2024-11-19 12:01:25.527166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.310 [2024-11-19 12:01:25.527284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.310 [2024-11-19 12:01:25.539122] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.310 [2024-11-19 12:01:25.539202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.310 [2024-11-19 12:01:25.539229] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.310 [2024-11-19 12:01:25.539251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.310 [2024-11-19 12:01:25.539269] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.310 [2024-11-19 12:01:25.539289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.310 
12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.310 [2024-11-19 12:01:25.587379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.310 BaseBdev1 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.310 [ 00:09:22.310 { 
00:09:22.310 "name": "BaseBdev1", 00:09:22.310 "aliases": [ 00:09:22.310 "11e5b227-2652-40ee-93d7-5ab2508e205d" 00:09:22.310 ], 00:09:22.310 "product_name": "Malloc disk", 00:09:22.310 "block_size": 512, 00:09:22.310 "num_blocks": 65536, 00:09:22.310 "uuid": "11e5b227-2652-40ee-93d7-5ab2508e205d", 00:09:22.310 "assigned_rate_limits": { 00:09:22.310 "rw_ios_per_sec": 0, 00:09:22.310 "rw_mbytes_per_sec": 0, 00:09:22.310 "r_mbytes_per_sec": 0, 00:09:22.310 "w_mbytes_per_sec": 0 00:09:22.310 }, 00:09:22.310 "claimed": true, 00:09:22.310 "claim_type": "exclusive_write", 00:09:22.310 "zoned": false, 00:09:22.310 "supported_io_types": { 00:09:22.310 "read": true, 00:09:22.310 "write": true, 00:09:22.310 "unmap": true, 00:09:22.310 "flush": true, 00:09:22.310 "reset": true, 00:09:22.310 "nvme_admin": false, 00:09:22.310 "nvme_io": false, 00:09:22.310 "nvme_io_md": false, 00:09:22.310 "write_zeroes": true, 00:09:22.310 "zcopy": true, 00:09:22.310 "get_zone_info": false, 00:09:22.310 "zone_management": false, 00:09:22.310 "zone_append": false, 00:09:22.310 "compare": false, 00:09:22.310 "compare_and_write": false, 00:09:22.310 "abort": true, 00:09:22.310 "seek_hole": false, 00:09:22.310 "seek_data": false, 00:09:22.310 "copy": true, 00:09:22.310 "nvme_iov_md": false 00:09:22.310 }, 00:09:22.310 "memory_domains": [ 00:09:22.310 { 00:09:22.310 "dma_device_id": "system", 00:09:22.310 "dma_device_type": 1 00:09:22.310 }, 00:09:22.310 { 00:09:22.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.310 "dma_device_type": 2 00:09:22.310 } 00:09:22.310 ], 00:09:22.310 "driver_specific": {} 00:09:22.310 } 00:09:22.310 ] 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.310 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.311 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.311 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.311 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.311 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.311 "name": "Existed_Raid", 00:09:22.311 "uuid": "bff5b6c6-890e-4933-a273-d1db3ddfb69d", 00:09:22.311 "strip_size_kb": 64, 00:09:22.311 "state": "configuring", 00:09:22.311 "raid_level": "concat", 00:09:22.311 "superblock": true, 00:09:22.311 
"num_base_bdevs": 3, 00:09:22.311 "num_base_bdevs_discovered": 1, 00:09:22.311 "num_base_bdevs_operational": 3, 00:09:22.311 "base_bdevs_list": [ 00:09:22.311 { 00:09:22.311 "name": "BaseBdev1", 00:09:22.311 "uuid": "11e5b227-2652-40ee-93d7-5ab2508e205d", 00:09:22.311 "is_configured": true, 00:09:22.311 "data_offset": 2048, 00:09:22.311 "data_size": 63488 00:09:22.311 }, 00:09:22.311 { 00:09:22.311 "name": "BaseBdev2", 00:09:22.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.311 "is_configured": false, 00:09:22.311 "data_offset": 0, 00:09:22.311 "data_size": 0 00:09:22.311 }, 00:09:22.311 { 00:09:22.311 "name": "BaseBdev3", 00:09:22.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.311 "is_configured": false, 00:09:22.311 "data_offset": 0, 00:09:22.311 "data_size": 0 00:09:22.311 } 00:09:22.311 ] 00:09:22.311 }' 00:09:22.311 12:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.311 12:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.881 [2024-11-19 12:01:26.070767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.881 [2024-11-19 12:01:26.070884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.881 
12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.881 [2024-11-19 12:01:26.078797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.881 [2024-11-19 12:01:26.080702] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.881 [2024-11-19 12:01:26.080747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.881 [2024-11-19 12:01:26.080757] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.881 [2024-11-19 12:01:26.080766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.881 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.881 "name": "Existed_Raid", 00:09:22.881 "uuid": "6e7d8d1d-77b0-4c17-afa9-bbcd0641849e", 00:09:22.881 "strip_size_kb": 64, 00:09:22.881 "state": "configuring", 00:09:22.881 "raid_level": "concat", 00:09:22.881 "superblock": true, 00:09:22.881 "num_base_bdevs": 3, 00:09:22.881 "num_base_bdevs_discovered": 1, 00:09:22.881 "num_base_bdevs_operational": 3, 00:09:22.881 "base_bdevs_list": [ 00:09:22.881 { 00:09:22.881 "name": "BaseBdev1", 00:09:22.881 "uuid": "11e5b227-2652-40ee-93d7-5ab2508e205d", 00:09:22.881 "is_configured": true, 00:09:22.881 "data_offset": 2048, 00:09:22.881 "data_size": 63488 00:09:22.881 }, 00:09:22.881 { 00:09:22.881 "name": "BaseBdev2", 00:09:22.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.882 "is_configured": false, 00:09:22.882 "data_offset": 0, 00:09:22.882 "data_size": 0 00:09:22.882 }, 00:09:22.882 { 00:09:22.882 "name": "BaseBdev3", 00:09:22.882 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:22.882 "is_configured": false, 00:09:22.882 "data_offset": 0, 00:09:22.882 "data_size": 0 00:09:22.882 } 00:09:22.882 ] 00:09:22.882 }' 00:09:22.882 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.882 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.451 [2024-11-19 12:01:26.621699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.451 BaseBdev2 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.451 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.451 [ 00:09:23.451 { 00:09:23.452 "name": "BaseBdev2", 00:09:23.452 "aliases": [ 00:09:23.452 "1fc573bd-1022-4bd4-8542-63268b49998a" 00:09:23.452 ], 00:09:23.452 "product_name": "Malloc disk", 00:09:23.452 "block_size": 512, 00:09:23.452 "num_blocks": 65536, 00:09:23.452 "uuid": "1fc573bd-1022-4bd4-8542-63268b49998a", 00:09:23.452 "assigned_rate_limits": { 00:09:23.452 "rw_ios_per_sec": 0, 00:09:23.452 "rw_mbytes_per_sec": 0, 00:09:23.452 "r_mbytes_per_sec": 0, 00:09:23.452 "w_mbytes_per_sec": 0 00:09:23.452 }, 00:09:23.452 "claimed": true, 00:09:23.452 "claim_type": "exclusive_write", 00:09:23.452 "zoned": false, 00:09:23.452 "supported_io_types": { 00:09:23.452 "read": true, 00:09:23.452 "write": true, 00:09:23.452 "unmap": true, 00:09:23.452 "flush": true, 00:09:23.452 "reset": true, 00:09:23.452 "nvme_admin": false, 00:09:23.452 "nvme_io": false, 00:09:23.452 "nvme_io_md": false, 00:09:23.452 "write_zeroes": true, 00:09:23.452 "zcopy": true, 00:09:23.452 "get_zone_info": false, 00:09:23.452 "zone_management": false, 00:09:23.452 "zone_append": false, 00:09:23.452 "compare": false, 00:09:23.452 "compare_and_write": false, 00:09:23.452 "abort": true, 00:09:23.452 "seek_hole": false, 00:09:23.452 "seek_data": false, 00:09:23.452 "copy": true, 00:09:23.452 "nvme_iov_md": false 00:09:23.452 }, 00:09:23.452 "memory_domains": [ 00:09:23.452 { 00:09:23.452 "dma_device_id": "system", 00:09:23.452 "dma_device_type": 1 00:09:23.452 }, 00:09:23.452 { 00:09:23.452 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.452 "dma_device_type": 2 00:09:23.452 } 00:09:23.452 ], 00:09:23.452 "driver_specific": {} 00:09:23.452 } 00:09:23.452 ] 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.452 "name": "Existed_Raid", 00:09:23.452 "uuid": "6e7d8d1d-77b0-4c17-afa9-bbcd0641849e", 00:09:23.452 "strip_size_kb": 64, 00:09:23.452 "state": "configuring", 00:09:23.452 "raid_level": "concat", 00:09:23.452 "superblock": true, 00:09:23.452 "num_base_bdevs": 3, 00:09:23.452 "num_base_bdevs_discovered": 2, 00:09:23.452 "num_base_bdevs_operational": 3, 00:09:23.452 "base_bdevs_list": [ 00:09:23.452 { 00:09:23.452 "name": "BaseBdev1", 00:09:23.452 "uuid": "11e5b227-2652-40ee-93d7-5ab2508e205d", 00:09:23.452 "is_configured": true, 00:09:23.452 "data_offset": 2048, 00:09:23.452 "data_size": 63488 00:09:23.452 }, 00:09:23.452 { 00:09:23.452 "name": "BaseBdev2", 00:09:23.452 "uuid": "1fc573bd-1022-4bd4-8542-63268b49998a", 00:09:23.452 "is_configured": true, 00:09:23.452 "data_offset": 2048, 00:09:23.452 "data_size": 63488 00:09:23.452 }, 00:09:23.452 { 00:09:23.452 "name": "BaseBdev3", 00:09:23.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.452 "is_configured": false, 00:09:23.452 "data_offset": 0, 00:09:23.452 "data_size": 0 00:09:23.452 } 00:09:23.452 ] 00:09:23.452 }' 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.452 12:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.712 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:23.712 12:01:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.712 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.972 [2024-11-19 12:01:27.118054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.972 [2024-11-19 12:01:27.118303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:23.972 [2024-11-19 12:01:27.118326] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:23.972 [2024-11-19 12:01:27.118587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:23.972 BaseBdev3 00:09:23.972 [2024-11-19 12:01:27.118735] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:23.972 [2024-11-19 12:01:27.118750] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:23.972 [2024-11-19 12:01:27.118906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.972 [ 00:09:23.972 { 00:09:23.972 "name": "BaseBdev3", 00:09:23.972 "aliases": [ 00:09:23.972 "09463786-8f68-4a20-a8c1-f572deb2b26a" 00:09:23.972 ], 00:09:23.972 "product_name": "Malloc disk", 00:09:23.972 "block_size": 512, 00:09:23.972 "num_blocks": 65536, 00:09:23.972 "uuid": "09463786-8f68-4a20-a8c1-f572deb2b26a", 00:09:23.972 "assigned_rate_limits": { 00:09:23.972 "rw_ios_per_sec": 0, 00:09:23.972 "rw_mbytes_per_sec": 0, 00:09:23.972 "r_mbytes_per_sec": 0, 00:09:23.972 "w_mbytes_per_sec": 0 00:09:23.972 }, 00:09:23.972 "claimed": true, 00:09:23.972 "claim_type": "exclusive_write", 00:09:23.972 "zoned": false, 00:09:23.972 "supported_io_types": { 00:09:23.972 "read": true, 00:09:23.972 "write": true, 00:09:23.972 "unmap": true, 00:09:23.972 "flush": true, 00:09:23.972 "reset": true, 00:09:23.972 "nvme_admin": false, 00:09:23.972 "nvme_io": false, 00:09:23.972 "nvme_io_md": false, 00:09:23.972 "write_zeroes": true, 00:09:23.972 "zcopy": true, 00:09:23.972 "get_zone_info": false, 00:09:23.972 "zone_management": false, 00:09:23.972 "zone_append": false, 00:09:23.972 "compare": false, 00:09:23.972 "compare_and_write": false, 00:09:23.972 "abort": true, 00:09:23.972 "seek_hole": false, 00:09:23.972 "seek_data": false, 
00:09:23.972 "copy": true, 00:09:23.972 "nvme_iov_md": false 00:09:23.972 }, 00:09:23.972 "memory_domains": [ 00:09:23.972 { 00:09:23.972 "dma_device_id": "system", 00:09:23.972 "dma_device_type": 1 00:09:23.972 }, 00:09:23.972 { 00:09:23.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.972 "dma_device_type": 2 00:09:23.972 } 00:09:23.972 ], 00:09:23.972 "driver_specific": {} 00:09:23.972 } 00:09:23.972 ] 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.972 "name": "Existed_Raid", 00:09:23.972 "uuid": "6e7d8d1d-77b0-4c17-afa9-bbcd0641849e", 00:09:23.972 "strip_size_kb": 64, 00:09:23.972 "state": "online", 00:09:23.972 "raid_level": "concat", 00:09:23.972 "superblock": true, 00:09:23.972 "num_base_bdevs": 3, 00:09:23.972 "num_base_bdevs_discovered": 3, 00:09:23.972 "num_base_bdevs_operational": 3, 00:09:23.972 "base_bdevs_list": [ 00:09:23.972 { 00:09:23.972 "name": "BaseBdev1", 00:09:23.972 "uuid": "11e5b227-2652-40ee-93d7-5ab2508e205d", 00:09:23.972 "is_configured": true, 00:09:23.972 "data_offset": 2048, 00:09:23.972 "data_size": 63488 00:09:23.972 }, 00:09:23.972 { 00:09:23.972 "name": "BaseBdev2", 00:09:23.972 "uuid": "1fc573bd-1022-4bd4-8542-63268b49998a", 00:09:23.972 "is_configured": true, 00:09:23.972 "data_offset": 2048, 00:09:23.972 "data_size": 63488 00:09:23.972 }, 00:09:23.972 { 00:09:23.972 "name": "BaseBdev3", 00:09:23.972 "uuid": "09463786-8f68-4a20-a8c1-f572deb2b26a", 00:09:23.972 "is_configured": true, 00:09:23.972 "data_offset": 2048, 00:09:23.972 "data_size": 63488 00:09:23.972 } 00:09:23.972 ] 00:09:23.972 }' 00:09:23.972 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.972 12:01:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.232 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:24.232 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:24.232 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.232 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.232 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.232 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.492 [2024-11-19 12:01:27.613539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.492 "name": "Existed_Raid", 00:09:24.492 "aliases": [ 00:09:24.492 "6e7d8d1d-77b0-4c17-afa9-bbcd0641849e" 00:09:24.492 ], 00:09:24.492 "product_name": "Raid Volume", 00:09:24.492 "block_size": 512, 00:09:24.492 "num_blocks": 190464, 00:09:24.492 "uuid": "6e7d8d1d-77b0-4c17-afa9-bbcd0641849e", 00:09:24.492 "assigned_rate_limits": { 00:09:24.492 "rw_ios_per_sec": 0, 00:09:24.492 "rw_mbytes_per_sec": 0, 00:09:24.492 
"r_mbytes_per_sec": 0, 00:09:24.492 "w_mbytes_per_sec": 0 00:09:24.492 }, 00:09:24.492 "claimed": false, 00:09:24.492 "zoned": false, 00:09:24.492 "supported_io_types": { 00:09:24.492 "read": true, 00:09:24.492 "write": true, 00:09:24.492 "unmap": true, 00:09:24.492 "flush": true, 00:09:24.492 "reset": true, 00:09:24.492 "nvme_admin": false, 00:09:24.492 "nvme_io": false, 00:09:24.492 "nvme_io_md": false, 00:09:24.492 "write_zeroes": true, 00:09:24.492 "zcopy": false, 00:09:24.492 "get_zone_info": false, 00:09:24.492 "zone_management": false, 00:09:24.492 "zone_append": false, 00:09:24.492 "compare": false, 00:09:24.492 "compare_and_write": false, 00:09:24.492 "abort": false, 00:09:24.492 "seek_hole": false, 00:09:24.492 "seek_data": false, 00:09:24.492 "copy": false, 00:09:24.492 "nvme_iov_md": false 00:09:24.492 }, 00:09:24.492 "memory_domains": [ 00:09:24.492 { 00:09:24.492 "dma_device_id": "system", 00:09:24.492 "dma_device_type": 1 00:09:24.492 }, 00:09:24.492 { 00:09:24.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.492 "dma_device_type": 2 00:09:24.492 }, 00:09:24.492 { 00:09:24.492 "dma_device_id": "system", 00:09:24.492 "dma_device_type": 1 00:09:24.492 }, 00:09:24.492 { 00:09:24.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.492 "dma_device_type": 2 00:09:24.492 }, 00:09:24.492 { 00:09:24.492 "dma_device_id": "system", 00:09:24.492 "dma_device_type": 1 00:09:24.492 }, 00:09:24.492 { 00:09:24.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.492 "dma_device_type": 2 00:09:24.492 } 00:09:24.492 ], 00:09:24.492 "driver_specific": { 00:09:24.492 "raid": { 00:09:24.492 "uuid": "6e7d8d1d-77b0-4c17-afa9-bbcd0641849e", 00:09:24.492 "strip_size_kb": 64, 00:09:24.492 "state": "online", 00:09:24.492 "raid_level": "concat", 00:09:24.492 "superblock": true, 00:09:24.492 "num_base_bdevs": 3, 00:09:24.492 "num_base_bdevs_discovered": 3, 00:09:24.492 "num_base_bdevs_operational": 3, 00:09:24.492 "base_bdevs_list": [ 00:09:24.492 { 00:09:24.492 
"name": "BaseBdev1", 00:09:24.492 "uuid": "11e5b227-2652-40ee-93d7-5ab2508e205d", 00:09:24.492 "is_configured": true, 00:09:24.492 "data_offset": 2048, 00:09:24.492 "data_size": 63488 00:09:24.492 }, 00:09:24.492 { 00:09:24.492 "name": "BaseBdev2", 00:09:24.492 "uuid": "1fc573bd-1022-4bd4-8542-63268b49998a", 00:09:24.492 "is_configured": true, 00:09:24.492 "data_offset": 2048, 00:09:24.492 "data_size": 63488 00:09:24.492 }, 00:09:24.492 { 00:09:24.492 "name": "BaseBdev3", 00:09:24.492 "uuid": "09463786-8f68-4a20-a8c1-f572deb2b26a", 00:09:24.492 "is_configured": true, 00:09:24.492 "data_offset": 2048, 00:09:24.492 "data_size": 63488 00:09:24.492 } 00:09:24.492 ] 00:09:24.492 } 00:09:24.492 } 00:09:24.492 }' 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:24.492 BaseBdev2 00:09:24.492 BaseBdev3' 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.492 12:01:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.492 12:01:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.752 [2024-11-19 12:01:27.880815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:24.752 [2024-11-19 12:01:27.880858] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.752 [2024-11-19 12:01:27.880908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.752 12:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.752 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.752 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.752 "name": "Existed_Raid", 00:09:24.752 "uuid": "6e7d8d1d-77b0-4c17-afa9-bbcd0641849e", 00:09:24.752 "strip_size_kb": 64, 00:09:24.752 "state": "offline", 00:09:24.752 "raid_level": "concat", 00:09:24.752 "superblock": true, 00:09:24.752 "num_base_bdevs": 3, 00:09:24.752 "num_base_bdevs_discovered": 2, 00:09:24.752 "num_base_bdevs_operational": 2, 00:09:24.752 "base_bdevs_list": [ 00:09:24.752 { 00:09:24.752 "name": null, 00:09:24.752 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:24.752 "is_configured": false, 00:09:24.752 "data_offset": 0, 00:09:24.752 "data_size": 63488 00:09:24.752 }, 00:09:24.752 { 00:09:24.752 "name": "BaseBdev2", 00:09:24.752 "uuid": "1fc573bd-1022-4bd4-8542-63268b49998a", 00:09:24.752 "is_configured": true, 00:09:24.752 "data_offset": 2048, 00:09:24.752 "data_size": 63488 00:09:24.752 }, 00:09:24.752 { 00:09:24.752 "name": "BaseBdev3", 00:09:24.752 "uuid": "09463786-8f68-4a20-a8c1-f572deb2b26a", 00:09:24.752 "is_configured": true, 00:09:24.752 "data_offset": 2048, 00:09:24.752 "data_size": 63488 00:09:24.752 } 00:09:24.752 ] 00:09:24.752 }' 00:09:24.753 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.753 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.346 [2024-11-19 12:01:28.523248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.346 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.346 [2024-11-19 12:01:28.675893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:25.346 [2024-11-19 12:01:28.676069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.606 BaseBdev2 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.606 
12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.606 [ 00:09:25.606 { 00:09:25.606 "name": "BaseBdev2", 00:09:25.606 "aliases": [ 00:09:25.606 "67132d6b-f395-43ba-a6cd-1e2591d56fb2" 00:09:25.606 ], 00:09:25.606 "product_name": "Malloc disk", 00:09:25.606 "block_size": 512, 00:09:25.606 "num_blocks": 65536, 00:09:25.606 "uuid": "67132d6b-f395-43ba-a6cd-1e2591d56fb2", 00:09:25.606 "assigned_rate_limits": { 00:09:25.606 "rw_ios_per_sec": 0, 00:09:25.606 "rw_mbytes_per_sec": 0, 00:09:25.606 "r_mbytes_per_sec": 0, 00:09:25.606 "w_mbytes_per_sec": 0 
00:09:25.606 }, 00:09:25.606 "claimed": false, 00:09:25.606 "zoned": false, 00:09:25.606 "supported_io_types": { 00:09:25.606 "read": true, 00:09:25.606 "write": true, 00:09:25.606 "unmap": true, 00:09:25.606 "flush": true, 00:09:25.606 "reset": true, 00:09:25.606 "nvme_admin": false, 00:09:25.606 "nvme_io": false, 00:09:25.606 "nvme_io_md": false, 00:09:25.606 "write_zeroes": true, 00:09:25.606 "zcopy": true, 00:09:25.606 "get_zone_info": false, 00:09:25.606 "zone_management": false, 00:09:25.606 "zone_append": false, 00:09:25.606 "compare": false, 00:09:25.606 "compare_and_write": false, 00:09:25.606 "abort": true, 00:09:25.606 "seek_hole": false, 00:09:25.606 "seek_data": false, 00:09:25.606 "copy": true, 00:09:25.606 "nvme_iov_md": false 00:09:25.606 }, 00:09:25.606 "memory_domains": [ 00:09:25.606 { 00:09:25.606 "dma_device_id": "system", 00:09:25.606 "dma_device_type": 1 00:09:25.606 }, 00:09:25.606 { 00:09:25.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.606 "dma_device_type": 2 00:09:25.606 } 00:09:25.606 ], 00:09:25.606 "driver_specific": {} 00:09:25.606 } 00:09:25.606 ] 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.606 BaseBdev3 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.606 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.606 [ 00:09:25.606 { 00:09:25.606 "name": "BaseBdev3", 00:09:25.607 "aliases": [ 00:09:25.607 "3518396f-b165-4883-909f-e6a5dd50fc77" 00:09:25.607 ], 00:09:25.607 "product_name": "Malloc disk", 00:09:25.607 "block_size": 512, 00:09:25.607 "num_blocks": 65536, 00:09:25.607 "uuid": "3518396f-b165-4883-909f-e6a5dd50fc77", 00:09:25.607 "assigned_rate_limits": { 00:09:25.607 "rw_ios_per_sec": 0, 00:09:25.607 "rw_mbytes_per_sec": 0, 
00:09:25.607 "r_mbytes_per_sec": 0, 00:09:25.607 "w_mbytes_per_sec": 0 00:09:25.607 }, 00:09:25.607 "claimed": false, 00:09:25.607 "zoned": false, 00:09:25.607 "supported_io_types": { 00:09:25.607 "read": true, 00:09:25.607 "write": true, 00:09:25.607 "unmap": true, 00:09:25.607 "flush": true, 00:09:25.607 "reset": true, 00:09:25.607 "nvme_admin": false, 00:09:25.607 "nvme_io": false, 00:09:25.607 "nvme_io_md": false, 00:09:25.607 "write_zeroes": true, 00:09:25.607 "zcopy": true, 00:09:25.607 "get_zone_info": false, 00:09:25.607 "zone_management": false, 00:09:25.607 "zone_append": false, 00:09:25.607 "compare": false, 00:09:25.607 "compare_and_write": false, 00:09:25.607 "abort": true, 00:09:25.607 "seek_hole": false, 00:09:25.607 "seek_data": false, 00:09:25.607 "copy": true, 00:09:25.607 "nvme_iov_md": false 00:09:25.607 }, 00:09:25.607 "memory_domains": [ 00:09:25.607 { 00:09:25.607 "dma_device_id": "system", 00:09:25.607 "dma_device_type": 1 00:09:25.607 }, 00:09:25.607 { 00:09:25.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.607 "dma_device_type": 2 00:09:25.607 } 00:09:25.607 ], 00:09:25.607 "driver_specific": {} 00:09:25.607 } 00:09:25.607 ] 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:25.607 [2024-11-19 12:01:28.963957] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.607 [2024-11-19 12:01:28.964028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.607 [2024-11-19 12:01:28.964052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.607 [2024-11-19 12:01:28.965843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.607 12:01:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.607 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.866 12:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.866 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.866 "name": "Existed_Raid", 00:09:25.866 "uuid": "0e731cd6-be04-47ca-b85c-b9864eef095d", 00:09:25.867 "strip_size_kb": 64, 00:09:25.867 "state": "configuring", 00:09:25.867 "raid_level": "concat", 00:09:25.867 "superblock": true, 00:09:25.867 "num_base_bdevs": 3, 00:09:25.867 "num_base_bdevs_discovered": 2, 00:09:25.867 "num_base_bdevs_operational": 3, 00:09:25.867 "base_bdevs_list": [ 00:09:25.867 { 00:09:25.867 "name": "BaseBdev1", 00:09:25.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.867 "is_configured": false, 00:09:25.867 "data_offset": 0, 00:09:25.867 "data_size": 0 00:09:25.867 }, 00:09:25.867 { 00:09:25.867 "name": "BaseBdev2", 00:09:25.867 "uuid": "67132d6b-f395-43ba-a6cd-1e2591d56fb2", 00:09:25.867 "is_configured": true, 00:09:25.867 "data_offset": 2048, 00:09:25.867 "data_size": 63488 00:09:25.867 }, 00:09:25.867 { 00:09:25.867 "name": "BaseBdev3", 00:09:25.867 "uuid": "3518396f-b165-4883-909f-e6a5dd50fc77", 00:09:25.867 "is_configured": true, 00:09:25.867 "data_offset": 2048, 00:09:25.867 "data_size": 63488 00:09:25.867 } 00:09:25.867 ] 00:09:25.867 }' 00:09:25.867 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.867 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
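`verify_raid_bdev_state` (bdev_raid.sh@103-115) captures the jq-selected JSON into `raid_bdev_info` and compares fields such as `state` and `num_base_bdevs_discovered` against the expected values. A rough dependency-free illustration of that field extraction, run against a static copy of the JSON from the trace — the `sed` patterns are an assumption for the sketch, not the test's actual implementation (the real test uses jq):

```shell
# Field checks in the style of verify_raid_bdev_state, against a static copy
# of the raid_bdev_info JSON seen in the trace. sed stands in for jq here
# only to keep the sketch dependency-free.
raid_bdev_info='{
  "name": "Existed_Raid",
  "uuid": "0e731cd6-be04-47ca-b85c-b9864eef095d",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1
}'
expected_state="configuring"

state=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"state": "\([^"]*\)".*/\1/p')
discovered=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"num_base_bdevs_discovered": \([0-9]*\).*/\1/p')

if [ "$state" != "$expected_state" ]; then
  echo "unexpected state: $state" >&2
  exit 1
fi
echo "state=$state discovered=$discovered"
```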
00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.126 [2024-11-19 12:01:29.435208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
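The `waitforbdev` calls seen above (common/autotest_common.sh@903-911) block until a freshly created bdev becomes visible, defaulting `bdev_timeout` to 2000 ms and returning 0 once `bdev_get_bdevs` reports it. In the real helper the wait is delegated server-side via `bdev_get_bdevs -b <name> -t <timeout>`; the client-side polling loop below is a simplified stand-in, with `bdev_exists` as a hypothetical stub:

```shell
# Simplified polling re-creation of the waitforbdev pattern. The real helper
# delegates the wait to: rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
bdev_exists() {
  # hypothetical stub: pretend only BaseBdev1 exists on the target
  [ "$1" = "BaseBdev1" ]
}

waitforbdev() {
  bdev_name=$1
  bdev_timeout=${2:-2000}        # ms; same default as the log's @906 line
  tries=$((bdev_timeout / 200))  # poll every ~200 ms
  while [ "$tries" -gt 0 ]; do
    if bdev_exists "$bdev_name"; then
      return 0
    fi
    sleep 0.2
    tries=$((tries - 1))
  done
  return 1
}

waitforbdev BaseBdev1 && echo "BaseBdev1 ready"
```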
00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.126 "name": "Existed_Raid", 00:09:26.126 "uuid": "0e731cd6-be04-47ca-b85c-b9864eef095d", 00:09:26.126 "strip_size_kb": 64, 00:09:26.126 "state": "configuring", 00:09:26.126 "raid_level": "concat", 00:09:26.126 "superblock": true, 00:09:26.126 "num_base_bdevs": 3, 00:09:26.126 "num_base_bdevs_discovered": 1, 00:09:26.126 "num_base_bdevs_operational": 3, 00:09:26.126 "base_bdevs_list": [ 00:09:26.126 { 00:09:26.126 "name": "BaseBdev1", 00:09:26.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.126 "is_configured": false, 00:09:26.126 "data_offset": 0, 00:09:26.126 "data_size": 0 00:09:26.126 }, 00:09:26.126 { 00:09:26.126 "name": null, 00:09:26.126 "uuid": "67132d6b-f395-43ba-a6cd-1e2591d56fb2", 00:09:26.126 "is_configured": false, 00:09:26.126 "data_offset": 0, 00:09:26.126 "data_size": 63488 00:09:26.126 }, 00:09:26.126 { 00:09:26.126 "name": "BaseBdev3", 00:09:26.126 "uuid": "3518396f-b165-4883-909f-e6a5dd50fc77", 00:09:26.126 "is_configured": true, 00:09:26.126 "data_offset": 2048, 00:09:26.126 "data_size": 63488 00:09:26.126 } 00:09:26.126 ] 00:09:26.126 }' 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.126 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.695 [2024-11-19 12:01:29.976086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.695 BaseBdev1 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.695 12:01:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.695 12:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.695 [ 00:09:26.695 { 00:09:26.695 "name": "BaseBdev1", 00:09:26.695 "aliases": [ 00:09:26.695 "10ac1847-cc81-4e80-ba18-de8e2da0deaa" 00:09:26.695 ], 00:09:26.695 "product_name": "Malloc disk", 00:09:26.695 "block_size": 512, 00:09:26.695 "num_blocks": 65536, 00:09:26.695 "uuid": "10ac1847-cc81-4e80-ba18-de8e2da0deaa", 00:09:26.695 "assigned_rate_limits": { 00:09:26.695 "rw_ios_per_sec": 0, 00:09:26.695 "rw_mbytes_per_sec": 0, 00:09:26.695 "r_mbytes_per_sec": 0, 00:09:26.695 "w_mbytes_per_sec": 0 00:09:26.695 }, 00:09:26.695 "claimed": true, 00:09:26.695 "claim_type": "exclusive_write", 00:09:26.695 "zoned": false, 00:09:26.695 "supported_io_types": { 00:09:26.695 "read": true, 00:09:26.695 "write": true, 00:09:26.695 "unmap": true, 00:09:26.695 "flush": true, 00:09:26.695 "reset": true, 00:09:26.695 "nvme_admin": false, 00:09:26.695 "nvme_io": false, 00:09:26.695 "nvme_io_md": false, 00:09:26.695 "write_zeroes": true, 00:09:26.695 "zcopy": true, 00:09:26.695 "get_zone_info": false, 00:09:26.695 "zone_management": false, 00:09:26.695 "zone_append": false, 00:09:26.695 "compare": false, 00:09:26.695 "compare_and_write": false, 00:09:26.695 "abort": true, 00:09:26.695 "seek_hole": false, 00:09:26.695 "seek_data": false, 00:09:26.695 "copy": true, 00:09:26.695 "nvme_iov_md": false 00:09:26.695 }, 00:09:26.695 "memory_domains": [ 00:09:26.695 { 00:09:26.695 "dma_device_id": "system", 00:09:26.696 "dma_device_type": 1 00:09:26.696 }, 00:09:26.696 { 00:09:26.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.696 
"dma_device_type": 2 00:09:26.696 } 00:09:26.696 ], 00:09:26.696 "driver_specific": {} 00:09:26.696 } 00:09:26.696 ] 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.696 "name": "Existed_Raid", 00:09:26.696 "uuid": "0e731cd6-be04-47ca-b85c-b9864eef095d", 00:09:26.696 "strip_size_kb": 64, 00:09:26.696 "state": "configuring", 00:09:26.696 "raid_level": "concat", 00:09:26.696 "superblock": true, 00:09:26.696 "num_base_bdevs": 3, 00:09:26.696 "num_base_bdevs_discovered": 2, 00:09:26.696 "num_base_bdevs_operational": 3, 00:09:26.696 "base_bdevs_list": [ 00:09:26.696 { 00:09:26.696 "name": "BaseBdev1", 00:09:26.696 "uuid": "10ac1847-cc81-4e80-ba18-de8e2da0deaa", 00:09:26.696 "is_configured": true, 00:09:26.696 "data_offset": 2048, 00:09:26.696 "data_size": 63488 00:09:26.696 }, 00:09:26.696 { 00:09:26.696 "name": null, 00:09:26.696 "uuid": "67132d6b-f395-43ba-a6cd-1e2591d56fb2", 00:09:26.696 "is_configured": false, 00:09:26.696 "data_offset": 0, 00:09:26.696 "data_size": 63488 00:09:26.696 }, 00:09:26.696 { 00:09:26.696 "name": "BaseBdev3", 00:09:26.696 "uuid": "3518396f-b165-4883-909f-e6a5dd50fc77", 00:09:26.696 "is_configured": true, 00:09:26.696 "data_offset": 2048, 00:09:26.696 "data_size": 63488 00:09:26.696 } 00:09:26.696 ] 00:09:26.696 }' 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.696 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.265 [2024-11-19 12:01:30.551179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.265 "name": "Existed_Raid", 00:09:27.265 "uuid": "0e731cd6-be04-47ca-b85c-b9864eef095d", 00:09:27.265 "strip_size_kb": 64, 00:09:27.265 "state": "configuring", 00:09:27.265 "raid_level": "concat", 00:09:27.265 "superblock": true, 00:09:27.265 "num_base_bdevs": 3, 00:09:27.265 "num_base_bdevs_discovered": 1, 00:09:27.265 "num_base_bdevs_operational": 3, 00:09:27.265 "base_bdevs_list": [ 00:09:27.265 { 00:09:27.265 "name": "BaseBdev1", 00:09:27.265 "uuid": "10ac1847-cc81-4e80-ba18-de8e2da0deaa", 00:09:27.265 "is_configured": true, 00:09:27.265 "data_offset": 2048, 00:09:27.265 "data_size": 63488 00:09:27.265 }, 00:09:27.265 { 00:09:27.265 "name": null, 00:09:27.265 "uuid": "67132d6b-f395-43ba-a6cd-1e2591d56fb2", 00:09:27.265 "is_configured": false, 00:09:27.265 "data_offset": 0, 00:09:27.265 "data_size": 63488 00:09:27.265 }, 00:09:27.265 { 00:09:27.265 "name": null, 00:09:27.265 "uuid": "3518396f-b165-4883-909f-e6a5dd50fc77", 00:09:27.265 "is_configured": false, 00:09:27.265 "data_offset": 0, 00:09:27.265 "data_size": 63488 00:09:27.265 } 00:09:27.265 ] 00:09:27.265 }' 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.265 12:01:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:27.834 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.834 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:27.834 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.834 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.834 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.834 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:27.834 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:27.834 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.834 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.834 [2024-11-19 12:01:31.062400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.834 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.834 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.834 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.835 12:01:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.835 "name": "Existed_Raid", 00:09:27.835 "uuid": "0e731cd6-be04-47ca-b85c-b9864eef095d", 00:09:27.835 "strip_size_kb": 64, 00:09:27.835 "state": "configuring", 00:09:27.835 "raid_level": "concat", 00:09:27.835 "superblock": true, 00:09:27.835 "num_base_bdevs": 3, 00:09:27.835 "num_base_bdevs_discovered": 2, 00:09:27.835 "num_base_bdevs_operational": 3, 00:09:27.835 "base_bdevs_list": [ 00:09:27.835 { 00:09:27.835 "name": "BaseBdev1", 00:09:27.835 "uuid": "10ac1847-cc81-4e80-ba18-de8e2da0deaa", 00:09:27.835 "is_configured": true, 00:09:27.835 "data_offset": 2048, 00:09:27.835 "data_size": 63488 00:09:27.835 }, 00:09:27.835 { 00:09:27.835 "name": null, 00:09:27.835 "uuid": "67132d6b-f395-43ba-a6cd-1e2591d56fb2", 00:09:27.835 "is_configured": 
false, 00:09:27.835 "data_offset": 0, 00:09:27.835 "data_size": 63488 00:09:27.835 }, 00:09:27.835 { 00:09:27.835 "name": "BaseBdev3", 00:09:27.835 "uuid": "3518396f-b165-4883-909f-e6a5dd50fc77", 00:09:27.835 "is_configured": true, 00:09:27.835 "data_offset": 2048, 00:09:27.835 "data_size": 63488 00:09:27.835 } 00:09:27.835 ] 00:09:27.835 }' 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.835 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.404 [2024-11-19 12:01:31.573528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.404 12:01:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.404 "name": "Existed_Raid", 00:09:28.404 "uuid": "0e731cd6-be04-47ca-b85c-b9864eef095d", 00:09:28.404 "strip_size_kb": 64, 00:09:28.404 "state": "configuring", 00:09:28.404 "raid_level": "concat", 00:09:28.404 "superblock": true, 00:09:28.404 "num_base_bdevs": 3, 00:09:28.404 
"num_base_bdevs_discovered": 1, 00:09:28.404 "num_base_bdevs_operational": 3, 00:09:28.404 "base_bdevs_list": [ 00:09:28.404 { 00:09:28.404 "name": null, 00:09:28.404 "uuid": "10ac1847-cc81-4e80-ba18-de8e2da0deaa", 00:09:28.404 "is_configured": false, 00:09:28.404 "data_offset": 0, 00:09:28.404 "data_size": 63488 00:09:28.404 }, 00:09:28.404 { 00:09:28.404 "name": null, 00:09:28.404 "uuid": "67132d6b-f395-43ba-a6cd-1e2591d56fb2", 00:09:28.404 "is_configured": false, 00:09:28.404 "data_offset": 0, 00:09:28.404 "data_size": 63488 00:09:28.404 }, 00:09:28.404 { 00:09:28.404 "name": "BaseBdev3", 00:09:28.404 "uuid": "3518396f-b165-4883-909f-e6a5dd50fc77", 00:09:28.404 "is_configured": true, 00:09:28.404 "data_offset": 2048, 00:09:28.404 "data_size": 63488 00:09:28.404 } 00:09:28.404 ] 00:09:28.404 }' 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.404 12:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.973 12:01:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.973 [2024-11-19 12:01:32.106105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.973 
12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.973 "name": "Existed_Raid", 00:09:28.973 "uuid": "0e731cd6-be04-47ca-b85c-b9864eef095d", 00:09:28.973 "strip_size_kb": 64, 00:09:28.973 "state": "configuring", 00:09:28.973 "raid_level": "concat", 00:09:28.973 "superblock": true, 00:09:28.973 "num_base_bdevs": 3, 00:09:28.973 "num_base_bdevs_discovered": 2, 00:09:28.973 "num_base_bdevs_operational": 3, 00:09:28.973 "base_bdevs_list": [ 00:09:28.973 { 00:09:28.973 "name": null, 00:09:28.973 "uuid": "10ac1847-cc81-4e80-ba18-de8e2da0deaa", 00:09:28.973 "is_configured": false, 00:09:28.973 "data_offset": 0, 00:09:28.973 "data_size": 63488 00:09:28.973 }, 00:09:28.973 { 00:09:28.973 "name": "BaseBdev2", 00:09:28.973 "uuid": "67132d6b-f395-43ba-a6cd-1e2591d56fb2", 00:09:28.973 "is_configured": true, 00:09:28.973 "data_offset": 2048, 00:09:28.973 "data_size": 63488 00:09:28.973 }, 00:09:28.973 { 00:09:28.973 "name": "BaseBdev3", 00:09:28.973 "uuid": "3518396f-b165-4883-909f-e6a5dd50fc77", 00:09:28.973 "is_configured": true, 00:09:28.973 "data_offset": 2048, 00:09:28.973 "data_size": 63488 00:09:28.973 } 00:09:28.973 ] 00:09:28.973 }' 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.973 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.234 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.234 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.234 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:29.234 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:29.234 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.234 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:29.234 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:29.234 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.234 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.234 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.234 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 10ac1847-cc81-4e80-ba18-de8e2da0deaa 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.494 [2024-11-19 12:01:32.666511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:29.494 [2024-11-19 12:01:32.666844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:29.494 [2024-11-19 12:01:32.666896] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:29.494 [2024-11-19 12:01:32.667197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:29.494 [2024-11-19 12:01:32.667393] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:29.494 [2024-11-19 12:01:32.667437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:09:29.494 NewBaseBdev 00:09:29.494 [2024-11-19 12:01:32.667612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.494 [ 00:09:29.494 { 00:09:29.494 "name": "NewBaseBdev", 00:09:29.494 "aliases": [ 00:09:29.494 "10ac1847-cc81-4e80-ba18-de8e2da0deaa" 00:09:29.494 ], 00:09:29.494 "product_name": "Malloc disk", 00:09:29.494 "block_size": 512, 
00:09:29.494 "num_blocks": 65536, 00:09:29.494 "uuid": "10ac1847-cc81-4e80-ba18-de8e2da0deaa", 00:09:29.494 "assigned_rate_limits": { 00:09:29.494 "rw_ios_per_sec": 0, 00:09:29.494 "rw_mbytes_per_sec": 0, 00:09:29.494 "r_mbytes_per_sec": 0, 00:09:29.494 "w_mbytes_per_sec": 0 00:09:29.494 }, 00:09:29.494 "claimed": true, 00:09:29.494 "claim_type": "exclusive_write", 00:09:29.494 "zoned": false, 00:09:29.494 "supported_io_types": { 00:09:29.494 "read": true, 00:09:29.494 "write": true, 00:09:29.494 "unmap": true, 00:09:29.494 "flush": true, 00:09:29.494 "reset": true, 00:09:29.494 "nvme_admin": false, 00:09:29.494 "nvme_io": false, 00:09:29.494 "nvme_io_md": false, 00:09:29.494 "write_zeroes": true, 00:09:29.494 "zcopy": true, 00:09:29.494 "get_zone_info": false, 00:09:29.494 "zone_management": false, 00:09:29.494 "zone_append": false, 00:09:29.494 "compare": false, 00:09:29.494 "compare_and_write": false, 00:09:29.494 "abort": true, 00:09:29.494 "seek_hole": false, 00:09:29.494 "seek_data": false, 00:09:29.494 "copy": true, 00:09:29.494 "nvme_iov_md": false 00:09:29.494 }, 00:09:29.494 "memory_domains": [ 00:09:29.494 { 00:09:29.494 "dma_device_id": "system", 00:09:29.494 "dma_device_type": 1 00:09:29.494 }, 00:09:29.494 { 00:09:29.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.494 "dma_device_type": 2 00:09:29.494 } 00:09:29.494 ], 00:09:29.494 "driver_specific": {} 00:09:29.494 } 00:09:29.494 ] 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.494 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.494 "name": "Existed_Raid", 00:09:29.494 "uuid": "0e731cd6-be04-47ca-b85c-b9864eef095d", 00:09:29.494 "strip_size_kb": 64, 00:09:29.494 "state": "online", 00:09:29.495 "raid_level": "concat", 00:09:29.495 "superblock": true, 00:09:29.495 "num_base_bdevs": 3, 00:09:29.495 "num_base_bdevs_discovered": 3, 00:09:29.495 "num_base_bdevs_operational": 3, 00:09:29.495 "base_bdevs_list": [ 00:09:29.495 { 00:09:29.495 "name": "NewBaseBdev", 00:09:29.495 "uuid": 
"10ac1847-cc81-4e80-ba18-de8e2da0deaa", 00:09:29.495 "is_configured": true, 00:09:29.495 "data_offset": 2048, 00:09:29.495 "data_size": 63488 00:09:29.495 }, 00:09:29.495 { 00:09:29.495 "name": "BaseBdev2", 00:09:29.495 "uuid": "67132d6b-f395-43ba-a6cd-1e2591d56fb2", 00:09:29.495 "is_configured": true, 00:09:29.495 "data_offset": 2048, 00:09:29.495 "data_size": 63488 00:09:29.495 }, 00:09:29.495 { 00:09:29.495 "name": "BaseBdev3", 00:09:29.495 "uuid": "3518396f-b165-4883-909f-e6a5dd50fc77", 00:09:29.495 "is_configured": true, 00:09:29.495 "data_offset": 2048, 00:09:29.495 "data_size": 63488 00:09:29.495 } 00:09:29.495 ] 00:09:29.495 }' 00:09:29.495 12:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.495 12:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:30.065 [2024-11-19 12:01:33.178021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.065 "name": "Existed_Raid", 00:09:30.065 "aliases": [ 00:09:30.065 "0e731cd6-be04-47ca-b85c-b9864eef095d" 00:09:30.065 ], 00:09:30.065 "product_name": "Raid Volume", 00:09:30.065 "block_size": 512, 00:09:30.065 "num_blocks": 190464, 00:09:30.065 "uuid": "0e731cd6-be04-47ca-b85c-b9864eef095d", 00:09:30.065 "assigned_rate_limits": { 00:09:30.065 "rw_ios_per_sec": 0, 00:09:30.065 "rw_mbytes_per_sec": 0, 00:09:30.065 "r_mbytes_per_sec": 0, 00:09:30.065 "w_mbytes_per_sec": 0 00:09:30.065 }, 00:09:30.065 "claimed": false, 00:09:30.065 "zoned": false, 00:09:30.065 "supported_io_types": { 00:09:30.065 "read": true, 00:09:30.065 "write": true, 00:09:30.065 "unmap": true, 00:09:30.065 "flush": true, 00:09:30.065 "reset": true, 00:09:30.065 "nvme_admin": false, 00:09:30.065 "nvme_io": false, 00:09:30.065 "nvme_io_md": false, 00:09:30.065 "write_zeroes": true, 00:09:30.065 "zcopy": false, 00:09:30.065 "get_zone_info": false, 00:09:30.065 "zone_management": false, 00:09:30.065 "zone_append": false, 00:09:30.065 "compare": false, 00:09:30.065 "compare_and_write": false, 00:09:30.065 "abort": false, 00:09:30.065 "seek_hole": false, 00:09:30.065 "seek_data": false, 00:09:30.065 "copy": false, 00:09:30.065 "nvme_iov_md": false 00:09:30.065 }, 00:09:30.065 "memory_domains": [ 00:09:30.065 { 00:09:30.065 "dma_device_id": "system", 00:09:30.065 "dma_device_type": 1 00:09:30.065 }, 00:09:30.065 { 00:09:30.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.065 "dma_device_type": 2 00:09:30.065 }, 00:09:30.065 { 00:09:30.065 "dma_device_id": "system", 00:09:30.065 "dma_device_type": 1 00:09:30.065 }, 00:09:30.065 { 00:09:30.065 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.065 "dma_device_type": 2 00:09:30.065 }, 00:09:30.065 { 00:09:30.065 "dma_device_id": "system", 00:09:30.065 "dma_device_type": 1 00:09:30.065 }, 00:09:30.065 { 00:09:30.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.065 "dma_device_type": 2 00:09:30.065 } 00:09:30.065 ], 00:09:30.065 "driver_specific": { 00:09:30.065 "raid": { 00:09:30.065 "uuid": "0e731cd6-be04-47ca-b85c-b9864eef095d", 00:09:30.065 "strip_size_kb": 64, 00:09:30.065 "state": "online", 00:09:30.065 "raid_level": "concat", 00:09:30.065 "superblock": true, 00:09:30.065 "num_base_bdevs": 3, 00:09:30.065 "num_base_bdevs_discovered": 3, 00:09:30.065 "num_base_bdevs_operational": 3, 00:09:30.065 "base_bdevs_list": [ 00:09:30.065 { 00:09:30.065 "name": "NewBaseBdev", 00:09:30.065 "uuid": "10ac1847-cc81-4e80-ba18-de8e2da0deaa", 00:09:30.065 "is_configured": true, 00:09:30.065 "data_offset": 2048, 00:09:30.065 "data_size": 63488 00:09:30.065 }, 00:09:30.065 { 00:09:30.065 "name": "BaseBdev2", 00:09:30.065 "uuid": "67132d6b-f395-43ba-a6cd-1e2591d56fb2", 00:09:30.065 "is_configured": true, 00:09:30.065 "data_offset": 2048, 00:09:30.065 "data_size": 63488 00:09:30.065 }, 00:09:30.065 { 00:09:30.065 "name": "BaseBdev3", 00:09:30.065 "uuid": "3518396f-b165-4883-909f-e6a5dd50fc77", 00:09:30.065 "is_configured": true, 00:09:30.065 "data_offset": 2048, 00:09:30.065 "data_size": 63488 00:09:30.065 } 00:09:30.065 ] 00:09:30.065 } 00:09:30.065 } 00:09:30.065 }' 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:30.065 BaseBdev2 00:09:30.065 BaseBdev3' 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.065 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.066 [2024-11-19 12:01:33.429250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.066 [2024-11-19 12:01:33.429361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.066 [2024-11-19 12:01:33.429440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.066 [2024-11-19 12:01:33.429497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.066 [2024-11-19 12:01:33.429509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66272 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66272 ']' 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66272 00:09:30.066 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:30.326 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.326 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66272 00:09:30.326 killing process with pid 66272 00:09:30.326 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.326 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.326 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66272' 00:09:30.326 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66272 00:09:30.326 12:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66272 00:09:30.326 [2024-11-19 12:01:33.461973] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.585 [2024-11-19 12:01:33.760171] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.528 ************************************ 00:09:31.528 END TEST raid_state_function_test_sb 00:09:31.528 ************************************ 00:09:31.528 12:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:31.528 00:09:31.528 real 0m10.731s 
00:09:31.528 user 0m17.200s 00:09:31.528 sys 0m1.829s 00:09:31.528 12:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.528 12:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.788 12:01:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:31.788 12:01:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:31.788 12:01:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.788 12:01:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.788 ************************************ 00:09:31.788 START TEST raid_superblock_test 00:09:31.788 ************************************ 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:31.788 12:01:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66898 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66898 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66898 ']' 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.788 12:01:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.788 [2024-11-19 12:01:35.022130] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:09:31.788 [2024-11-19 12:01:35.022889] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66898 ] 00:09:32.048 [2024-11-19 12:01:35.185921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.048 [2024-11-19 12:01:35.301020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.307 [2024-11-19 12:01:35.495798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.307 [2024-11-19 12:01:35.495937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:32.567 
12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.567 malloc1 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.567 [2024-11-19 12:01:35.889153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:32.567 [2024-11-19 12:01:35.889222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.567 [2024-11-19 12:01:35.889245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:32.567 [2024-11-19 12:01:35.889254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.567 [2024-11-19 12:01:35.891310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.567 [2024-11-19 12:01:35.891349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:32.567 pt1 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.567 malloc2 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.567 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.828 [2024-11-19 12:01:35.947046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:32.828 [2024-11-19 12:01:35.947188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.828 [2024-11-19 12:01:35.947226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:32.828 [2024-11-19 12:01:35.947257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.828 [2024-11-19 12:01:35.949286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.828 [2024-11-19 12:01:35.949354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:32.828 
pt2 00:09:32.828 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.828 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:32.828 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:32.828 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:32.828 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:32.828 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:32.828 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:32.828 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:32.828 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:32.828 12:01:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:32.828 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.828 12:01:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.828 malloc3 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.828 [2024-11-19 12:01:36.010725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:32.828 [2024-11-19 12:01:36.010782] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.828 [2024-11-19 12:01:36.010804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:32.828 [2024-11-19 12:01:36.010813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.828 [2024-11-19 12:01:36.012887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.828 [2024-11-19 12:01:36.013008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:32.828 pt3 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.828 [2024-11-19 12:01:36.022756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:32.828 [2024-11-19 12:01:36.024578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:32.828 [2024-11-19 12:01:36.024700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:32.828 [2024-11-19 12:01:36.024860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:32.828 [2024-11-19 12:01:36.024875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:32.828 [2024-11-19 12:01:36.025125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:32.828 [2024-11-19 12:01:36.025281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:32.828 [2024-11-19 12:01:36.025292] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:32.828 [2024-11-19 12:01:36.025454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.828 12:01:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.828 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.828 "name": "raid_bdev1", 00:09:32.828 "uuid": "acda38b2-dfd5-4668-9695-2096ef95c366", 00:09:32.828 "strip_size_kb": 64, 00:09:32.828 "state": "online", 00:09:32.828 "raid_level": "concat", 00:09:32.828 "superblock": true, 00:09:32.828 "num_base_bdevs": 3, 00:09:32.828 "num_base_bdevs_discovered": 3, 00:09:32.828 "num_base_bdevs_operational": 3, 00:09:32.828 "base_bdevs_list": [ 00:09:32.828 { 00:09:32.828 "name": "pt1", 00:09:32.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:32.828 "is_configured": true, 00:09:32.828 "data_offset": 2048, 00:09:32.828 "data_size": 63488 00:09:32.828 }, 00:09:32.828 { 00:09:32.828 "name": "pt2", 00:09:32.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.828 "is_configured": true, 00:09:32.828 "data_offset": 2048, 00:09:32.828 "data_size": 63488 00:09:32.828 }, 00:09:32.829 { 00:09:32.829 "name": "pt3", 00:09:32.829 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:32.829 "is_configured": true, 00:09:32.829 "data_offset": 2048, 00:09:32.829 "data_size": 63488 00:09:32.829 } 00:09:32.829 ] 00:09:32.829 }' 00:09:32.829 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.829 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.398 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:33.398 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:33.398 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:33.398 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:33.398 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.398 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.398 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.398 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:33.398 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.398 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.398 [2024-11-19 12:01:36.494256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.398 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.398 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.398 "name": "raid_bdev1", 00:09:33.398 "aliases": [ 00:09:33.398 "acda38b2-dfd5-4668-9695-2096ef95c366" 00:09:33.398 ], 00:09:33.398 "product_name": "Raid Volume", 00:09:33.398 "block_size": 512, 00:09:33.398 "num_blocks": 190464, 00:09:33.398 "uuid": "acda38b2-dfd5-4668-9695-2096ef95c366", 00:09:33.398 "assigned_rate_limits": { 00:09:33.398 "rw_ios_per_sec": 0, 00:09:33.398 "rw_mbytes_per_sec": 0, 00:09:33.398 "r_mbytes_per_sec": 0, 00:09:33.398 "w_mbytes_per_sec": 0 00:09:33.398 }, 00:09:33.398 "claimed": false, 00:09:33.398 "zoned": false, 00:09:33.398 "supported_io_types": { 00:09:33.398 "read": true, 00:09:33.398 "write": true, 00:09:33.398 "unmap": true, 00:09:33.398 "flush": true, 00:09:33.398 "reset": true, 00:09:33.398 "nvme_admin": false, 00:09:33.398 "nvme_io": false, 00:09:33.398 "nvme_io_md": false, 00:09:33.398 "write_zeroes": true, 00:09:33.398 "zcopy": false, 00:09:33.398 "get_zone_info": false, 00:09:33.398 "zone_management": false, 00:09:33.398 "zone_append": false, 00:09:33.398 "compare": 
false, 00:09:33.398 "compare_and_write": false, 00:09:33.398 "abort": false, 00:09:33.398 "seek_hole": false, 00:09:33.398 "seek_data": false, 00:09:33.398 "copy": false, 00:09:33.398 "nvme_iov_md": false 00:09:33.398 }, 00:09:33.398 "memory_domains": [ 00:09:33.398 { 00:09:33.398 "dma_device_id": "system", 00:09:33.398 "dma_device_type": 1 00:09:33.398 }, 00:09:33.398 { 00:09:33.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.398 "dma_device_type": 2 00:09:33.398 }, 00:09:33.398 { 00:09:33.398 "dma_device_id": "system", 00:09:33.398 "dma_device_type": 1 00:09:33.398 }, 00:09:33.398 { 00:09:33.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.398 "dma_device_type": 2 00:09:33.398 }, 00:09:33.398 { 00:09:33.398 "dma_device_id": "system", 00:09:33.398 "dma_device_type": 1 00:09:33.398 }, 00:09:33.398 { 00:09:33.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.398 "dma_device_type": 2 00:09:33.398 } 00:09:33.398 ], 00:09:33.398 "driver_specific": { 00:09:33.398 "raid": { 00:09:33.398 "uuid": "acda38b2-dfd5-4668-9695-2096ef95c366", 00:09:33.398 "strip_size_kb": 64, 00:09:33.398 "state": "online", 00:09:33.398 "raid_level": "concat", 00:09:33.398 "superblock": true, 00:09:33.398 "num_base_bdevs": 3, 00:09:33.398 "num_base_bdevs_discovered": 3, 00:09:33.398 "num_base_bdevs_operational": 3, 00:09:33.398 "base_bdevs_list": [ 00:09:33.398 { 00:09:33.398 "name": "pt1", 00:09:33.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.398 "is_configured": true, 00:09:33.398 "data_offset": 2048, 00:09:33.399 "data_size": 63488 00:09:33.399 }, 00:09:33.399 { 00:09:33.399 "name": "pt2", 00:09:33.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.399 "is_configured": true, 00:09:33.399 "data_offset": 2048, 00:09:33.399 "data_size": 63488 00:09:33.399 }, 00:09:33.399 { 00:09:33.399 "name": "pt3", 00:09:33.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.399 "is_configured": true, 00:09:33.399 "data_offset": 2048, 00:09:33.399 
"data_size": 63488 00:09:33.399 } 00:09:33.399 ] 00:09:33.399 } 00:09:33.399 } 00:09:33.399 }' 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:33.399 pt2 00:09:33.399 pt3' 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.399 12:01:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.399 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.399 [2024-11-19 12:01:36.769688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.660 12:01:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=acda38b2-dfd5-4668-9695-2096ef95c366 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z acda38b2-dfd5-4668-9695-2096ef95c366 ']' 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.660 [2024-11-19 12:01:36.809345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:33.660 [2024-11-19 12:01:36.809433] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.660 [2024-11-19 12:01:36.809526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.660 [2024-11-19 12:01:36.809604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.660 [2024-11-19 12:01:36.809637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.660 12:01:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.660 [2024-11-19 12:01:36.953156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:33.660 [2024-11-19 12:01:36.955096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:09:33.660 [2024-11-19 12:01:36.955196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:33.660 [2024-11-19 12:01:36.955266] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:33.660 [2024-11-19 12:01:36.955355] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:33.660 [2024-11-19 12:01:36.955421] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:33.660 [2024-11-19 12:01:36.955498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:33.660 [2024-11-19 12:01:36.955542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:33.660 request: 00:09:33.660 { 00:09:33.660 "name": "raid_bdev1", 00:09:33.660 "raid_level": "concat", 00:09:33.660 "base_bdevs": [ 00:09:33.660 "malloc1", 00:09:33.660 "malloc2", 00:09:33.660 "malloc3" 00:09:33.660 ], 00:09:33.660 "strip_size_kb": 64, 00:09:33.660 "superblock": false, 00:09:33.660 "method": "bdev_raid_create", 00:09:33.660 "req_id": 1 00:09:33.660 } 00:09:33.660 Got JSON-RPC error response 00:09:33.660 response: 00:09:33.660 { 00:09:33.660 "code": -17, 00:09:33.660 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:33.660 } 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.660 12:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.660 [2024-11-19 12:01:37.020983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:33.660 [2024-11-19 12:01:37.021041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.660 [2024-11-19 12:01:37.021059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:33.660 [2024-11-19 12:01:37.021067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.660 [2024-11-19 12:01:37.023168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.660 [2024-11-19 12:01:37.023202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:33.660 [2024-11-19 12:01:37.023276] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:33.660 [2024-11-19 12:01:37.023323] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:33.660 pt1 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.660 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.921 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.921 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.921 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.921 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.921 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.921 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.921 "name": "raid_bdev1", 
00:09:33.921 "uuid": "acda38b2-dfd5-4668-9695-2096ef95c366", 00:09:33.921 "strip_size_kb": 64, 00:09:33.921 "state": "configuring", 00:09:33.921 "raid_level": "concat", 00:09:33.921 "superblock": true, 00:09:33.921 "num_base_bdevs": 3, 00:09:33.921 "num_base_bdevs_discovered": 1, 00:09:33.921 "num_base_bdevs_operational": 3, 00:09:33.921 "base_bdevs_list": [ 00:09:33.921 { 00:09:33.921 "name": "pt1", 00:09:33.921 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.921 "is_configured": true, 00:09:33.921 "data_offset": 2048, 00:09:33.921 "data_size": 63488 00:09:33.921 }, 00:09:33.921 { 00:09:33.921 "name": null, 00:09:33.921 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.921 "is_configured": false, 00:09:33.921 "data_offset": 2048, 00:09:33.921 "data_size": 63488 00:09:33.921 }, 00:09:33.921 { 00:09:33.921 "name": null, 00:09:33.921 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.921 "is_configured": false, 00:09:33.921 "data_offset": 2048, 00:09:33.921 "data_size": 63488 00:09:33.921 } 00:09:33.921 ] 00:09:33.921 }' 00:09:33.921 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.921 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.181 [2024-11-19 12:01:37.488215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:34.181 [2024-11-19 12:01:37.488288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.181 [2024-11-19 12:01:37.488311] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:34.181 [2024-11-19 12:01:37.488319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.181 [2024-11-19 12:01:37.488757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.181 [2024-11-19 12:01:37.488775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:34.181 [2024-11-19 12:01:37.488860] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:34.181 [2024-11-19 12:01:37.488881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:34.181 pt2 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.181 [2024-11-19 12:01:37.496201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.181 "name": "raid_bdev1", 00:09:34.181 "uuid": "acda38b2-dfd5-4668-9695-2096ef95c366", 00:09:34.181 "strip_size_kb": 64, 00:09:34.181 "state": "configuring", 00:09:34.181 "raid_level": "concat", 00:09:34.181 "superblock": true, 00:09:34.181 "num_base_bdevs": 3, 00:09:34.181 "num_base_bdevs_discovered": 1, 00:09:34.181 "num_base_bdevs_operational": 3, 00:09:34.181 "base_bdevs_list": [ 00:09:34.181 { 00:09:34.181 "name": "pt1", 00:09:34.181 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.181 "is_configured": true, 00:09:34.181 "data_offset": 2048, 00:09:34.181 "data_size": 63488 00:09:34.181 }, 00:09:34.181 { 00:09:34.181 "name": null, 00:09:34.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.181 "is_configured": false, 00:09:34.181 "data_offset": 0, 00:09:34.181 "data_size": 63488 00:09:34.181 }, 00:09:34.181 { 00:09:34.181 "name": null, 00:09:34.181 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.181 "is_configured": false, 00:09:34.181 "data_offset": 2048, 00:09:34.181 "data_size": 63488 00:09:34.181 } 00:09:34.181 ] 00:09:34.181 }' 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.181 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.752 [2024-11-19 12:01:37.947397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:34.752 [2024-11-19 12:01:37.947472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.752 [2024-11-19 12:01:37.947490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:34.752 [2024-11-19 12:01:37.947501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.752 [2024-11-19 12:01:37.947961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.752 [2024-11-19 12:01:37.947987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:34.752 [2024-11-19 12:01:37.948089] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:34.752 [2024-11-19 12:01:37.948116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:34.752 pt2 00:09:34.752 12:01:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.752 [2024-11-19 12:01:37.955357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:34.752 [2024-11-19 12:01:37.955407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.752 [2024-11-19 12:01:37.955420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:34.752 [2024-11-19 12:01:37.955430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.752 [2024-11-19 12:01:37.955781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.752 [2024-11-19 12:01:37.955808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:34.752 [2024-11-19 12:01:37.955868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:34.752 [2024-11-19 12:01:37.955888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:34.752 [2024-11-19 12:01:37.956034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:34.752 [2024-11-19 12:01:37.956052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:34.752 [2024-11-19 12:01:37.956296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:34.752 [2024-11-19 12:01:37.956444] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:34.752 [2024-11-19 12:01:37.956452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:34.752 [2024-11-19 12:01:37.956592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.752 pt3 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.752 12:01:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.752 12:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.752 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.752 "name": "raid_bdev1", 00:09:34.752 "uuid": "acda38b2-dfd5-4668-9695-2096ef95c366", 00:09:34.752 "strip_size_kb": 64, 00:09:34.752 "state": "online", 00:09:34.752 "raid_level": "concat", 00:09:34.752 "superblock": true, 00:09:34.752 "num_base_bdevs": 3, 00:09:34.752 "num_base_bdevs_discovered": 3, 00:09:34.752 "num_base_bdevs_operational": 3, 00:09:34.752 "base_bdevs_list": [ 00:09:34.752 { 00:09:34.752 "name": "pt1", 00:09:34.752 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.752 "is_configured": true, 00:09:34.752 "data_offset": 2048, 00:09:34.752 "data_size": 63488 00:09:34.752 }, 00:09:34.752 { 00:09:34.752 "name": "pt2", 00:09:34.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.752 "is_configured": true, 00:09:34.752 "data_offset": 2048, 00:09:34.752 "data_size": 63488 00:09:34.752 }, 00:09:34.752 { 00:09:34.752 "name": "pt3", 00:09:34.752 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.752 "is_configured": true, 00:09:34.752 "data_offset": 2048, 00:09:34.752 "data_size": 63488 00:09:34.752 } 00:09:34.752 ] 00:09:34.752 }' 00:09:34.752 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.752 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.013 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:35.013 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:09:35.013 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.013 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.013 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.013 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.013 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.013 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.013 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.013 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.013 [2024-11-19 12:01:38.382986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.273 "name": "raid_bdev1", 00:09:35.273 "aliases": [ 00:09:35.273 "acda38b2-dfd5-4668-9695-2096ef95c366" 00:09:35.273 ], 00:09:35.273 "product_name": "Raid Volume", 00:09:35.273 "block_size": 512, 00:09:35.273 "num_blocks": 190464, 00:09:35.273 "uuid": "acda38b2-dfd5-4668-9695-2096ef95c366", 00:09:35.273 "assigned_rate_limits": { 00:09:35.273 "rw_ios_per_sec": 0, 00:09:35.273 "rw_mbytes_per_sec": 0, 00:09:35.273 "r_mbytes_per_sec": 0, 00:09:35.273 "w_mbytes_per_sec": 0 00:09:35.273 }, 00:09:35.273 "claimed": false, 00:09:35.273 "zoned": false, 00:09:35.273 "supported_io_types": { 00:09:35.273 "read": true, 00:09:35.273 "write": true, 00:09:35.273 "unmap": true, 00:09:35.273 "flush": true, 00:09:35.273 "reset": true, 00:09:35.273 "nvme_admin": false, 00:09:35.273 "nvme_io": false, 00:09:35.273 
"nvme_io_md": false, 00:09:35.273 "write_zeroes": true, 00:09:35.273 "zcopy": false, 00:09:35.273 "get_zone_info": false, 00:09:35.273 "zone_management": false, 00:09:35.273 "zone_append": false, 00:09:35.273 "compare": false, 00:09:35.273 "compare_and_write": false, 00:09:35.273 "abort": false, 00:09:35.273 "seek_hole": false, 00:09:35.273 "seek_data": false, 00:09:35.273 "copy": false, 00:09:35.273 "nvme_iov_md": false 00:09:35.273 }, 00:09:35.273 "memory_domains": [ 00:09:35.273 { 00:09:35.273 "dma_device_id": "system", 00:09:35.273 "dma_device_type": 1 00:09:35.273 }, 00:09:35.273 { 00:09:35.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.273 "dma_device_type": 2 00:09:35.273 }, 00:09:35.273 { 00:09:35.273 "dma_device_id": "system", 00:09:35.273 "dma_device_type": 1 00:09:35.273 }, 00:09:35.273 { 00:09:35.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.273 "dma_device_type": 2 00:09:35.273 }, 00:09:35.273 { 00:09:35.273 "dma_device_id": "system", 00:09:35.273 "dma_device_type": 1 00:09:35.273 }, 00:09:35.273 { 00:09:35.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.273 "dma_device_type": 2 00:09:35.273 } 00:09:35.273 ], 00:09:35.273 "driver_specific": { 00:09:35.273 "raid": { 00:09:35.273 "uuid": "acda38b2-dfd5-4668-9695-2096ef95c366", 00:09:35.273 "strip_size_kb": 64, 00:09:35.273 "state": "online", 00:09:35.273 "raid_level": "concat", 00:09:35.273 "superblock": true, 00:09:35.273 "num_base_bdevs": 3, 00:09:35.273 "num_base_bdevs_discovered": 3, 00:09:35.273 "num_base_bdevs_operational": 3, 00:09:35.273 "base_bdevs_list": [ 00:09:35.273 { 00:09:35.273 "name": "pt1", 00:09:35.273 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.273 "is_configured": true, 00:09:35.273 "data_offset": 2048, 00:09:35.273 "data_size": 63488 00:09:35.273 }, 00:09:35.273 { 00:09:35.273 "name": "pt2", 00:09:35.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.273 "is_configured": true, 00:09:35.273 "data_offset": 2048, 00:09:35.273 "data_size": 
63488 00:09:35.273 }, 00:09:35.273 { 00:09:35.273 "name": "pt3", 00:09:35.273 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.273 "is_configured": true, 00:09:35.273 "data_offset": 2048, 00:09:35.273 "data_size": 63488 00:09:35.273 } 00:09:35.273 ] 00:09:35.273 } 00:09:35.273 } 00:09:35.273 }' 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:35.273 pt2 00:09:35.273 pt3' 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.273 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:09:35.534 [2024-11-19 12:01:38.678392] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' acda38b2-dfd5-4668-9695-2096ef95c366 '!=' acda38b2-dfd5-4668-9695-2096ef95c366 ']' 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66898 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66898 ']' 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66898 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66898 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66898' 00:09:35.534 killing process with pid 66898 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66898 00:09:35.534 [2024-11-19 12:01:38.757868] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.534 [2024-11-19 
12:01:38.758071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.534 12:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66898 00:09:35.534 [2024-11-19 12:01:38.758175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.534 [2024-11-19 12:01:38.758231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:35.794 [2024-11-19 12:01:39.057546] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.173 12:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:37.173 00:09:37.173 real 0m5.223s 00:09:37.173 user 0m7.523s 00:09:37.173 sys 0m0.915s 00:09:37.173 12:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.173 12:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.173 ************************************ 00:09:37.173 END TEST raid_superblock_test 00:09:37.173 ************************************ 00:09:37.173 12:01:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:37.173 12:01:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:37.173 12:01:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.173 12:01:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.173 ************************************ 00:09:37.173 START TEST raid_read_error_test 00:09:37.173 ************************************ 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:37.173 
12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:37.173 12:01:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.U9z1qC2XpF 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67151 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67151 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67151 ']' 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.173 12:01:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.173 [2024-11-19 12:01:40.330559] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:09:37.173 [2024-11-19 12:01:40.330709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67151 ] 00:09:37.173 [2024-11-19 12:01:40.509100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.433 [2024-11-19 12:01:40.629767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.693 [2024-11-19 12:01:40.829803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.693 [2024-11-19 12:01:40.829864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.953 BaseBdev1_malloc 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.953 true 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.953 [2024-11-19 12:01:41.212369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:37.953 [2024-11-19 12:01:41.212423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.953 [2024-11-19 12:01:41.212441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:37.953 [2024-11-19 12:01:41.212453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.953 [2024-11-19 12:01:41.214394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.953 [2024-11-19 12:01:41.214429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:37.953 BaseBdev1 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.953 BaseBdev2_malloc 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.953 true 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.953 [2024-11-19 12:01:41.265549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:37.953 [2024-11-19 12:01:41.265601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.953 [2024-11-19 12:01:41.265617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:37.953 [2024-11-19 12:01:41.265626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.953 [2024-11-19 12:01:41.267584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.953 [2024-11-19 12:01:41.267622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:37.953 BaseBdev2 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.953 BaseBdev3_malloc 00:09:37.953 12:01:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.953 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.213 true 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.213 [2024-11-19 12:01:41.345199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:38.213 [2024-11-19 12:01:41.345256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.213 [2024-11-19 12:01:41.345273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:38.213 [2024-11-19 12:01:41.345284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.213 [2024-11-19 12:01:41.347304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.213 [2024-11-19 12:01:41.347341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:38.213 BaseBdev3 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.213 [2024-11-19 12:01:41.357252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.213 [2024-11-19 12:01:41.358962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.213 [2024-11-19 12:01:41.359062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.213 [2024-11-19 12:01:41.359249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:38.213 [2024-11-19 12:01:41.359269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:38.213 [2024-11-19 12:01:41.359500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:38.213 [2024-11-19 12:01:41.359648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:38.213 [2024-11-19 12:01:41.359668] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:38.213 [2024-11-19 12:01:41.359808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.213 12:01:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.213 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.214 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.214 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.214 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.214 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.214 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.214 "name": "raid_bdev1", 00:09:38.214 "uuid": "fd2d37d5-ae39-4e1b-90ec-098830848ba9", 00:09:38.214 "strip_size_kb": 64, 00:09:38.214 "state": "online", 00:09:38.214 "raid_level": "concat", 00:09:38.214 "superblock": true, 00:09:38.214 "num_base_bdevs": 3, 00:09:38.214 "num_base_bdevs_discovered": 3, 00:09:38.214 "num_base_bdevs_operational": 3, 00:09:38.214 "base_bdevs_list": [ 00:09:38.214 { 00:09:38.214 "name": "BaseBdev1", 00:09:38.214 "uuid": "a4cfad3c-8987-50fb-9a0a-0abbcb2ae419", 00:09:38.214 "is_configured": true, 00:09:38.214 "data_offset": 2048, 00:09:38.214 "data_size": 63488 00:09:38.214 }, 00:09:38.214 { 00:09:38.214 "name": "BaseBdev2", 00:09:38.214 "uuid": "dacbb089-1b37-5ce4-9d54-5170c5e8e894", 00:09:38.214 "is_configured": true, 00:09:38.214 "data_offset": 2048, 00:09:38.214 "data_size": 63488 
00:09:38.214 }, 00:09:38.214 { 00:09:38.214 "name": "BaseBdev3", 00:09:38.214 "uuid": "99dce148-5e5e-5038-afa3-27f0a405a433", 00:09:38.214 "is_configured": true, 00:09:38.214 "data_offset": 2048, 00:09:38.214 "data_size": 63488 00:09:38.214 } 00:09:38.214 ] 00:09:38.214 }' 00:09:38.214 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.214 12:01:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.473 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:38.473 12:01:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:38.732 [2024-11-19 12:01:41.881565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.672 "name": "raid_bdev1", 00:09:39.672 "uuid": "fd2d37d5-ae39-4e1b-90ec-098830848ba9", 00:09:39.672 "strip_size_kb": 64, 00:09:39.672 "state": "online", 00:09:39.672 "raid_level": "concat", 00:09:39.672 "superblock": true, 00:09:39.672 "num_base_bdevs": 3, 00:09:39.672 "num_base_bdevs_discovered": 3, 00:09:39.672 "num_base_bdevs_operational": 3, 00:09:39.672 "base_bdevs_list": [ 00:09:39.672 { 00:09:39.672 "name": "BaseBdev1", 00:09:39.672 "uuid": "a4cfad3c-8987-50fb-9a0a-0abbcb2ae419", 00:09:39.672 "is_configured": true, 00:09:39.672 "data_offset": 2048, 00:09:39.672 "data_size": 63488 
00:09:39.672 }, 00:09:39.672 { 00:09:39.672 "name": "BaseBdev2", 00:09:39.672 "uuid": "dacbb089-1b37-5ce4-9d54-5170c5e8e894", 00:09:39.672 "is_configured": true, 00:09:39.672 "data_offset": 2048, 00:09:39.672 "data_size": 63488 00:09:39.672 }, 00:09:39.672 { 00:09:39.672 "name": "BaseBdev3", 00:09:39.672 "uuid": "99dce148-5e5e-5038-afa3-27f0a405a433", 00:09:39.672 "is_configured": true, 00:09:39.672 "data_offset": 2048, 00:09:39.672 "data_size": 63488 00:09:39.672 } 00:09:39.672 ] 00:09:39.672 }' 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.672 12:01:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.933 12:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:39.933 12:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.933 12:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.933 [2024-11-19 12:01:43.265320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.933 [2024-11-19 12:01:43.265358] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.933 [2024-11-19 12:01:43.267861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.933 [2024-11-19 12:01:43.267910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.933 [2024-11-19 12:01:43.267946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.933 [2024-11-19 12:01:43.267959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:39.933 { 00:09:39.933 "results": [ 00:09:39.933 { 00:09:39.933 "job": "raid_bdev1", 00:09:39.933 "core_mask": "0x1", 00:09:39.933 "workload": "randrw", 00:09:39.933 "percentage": 50, 
00:09:39.933 "status": "finished", 00:09:39.933 "queue_depth": 1, 00:09:39.933 "io_size": 131072, 00:09:39.933 "runtime": 1.384673, 00:09:39.933 "iops": 16247.157271066888, 00:09:39.933 "mibps": 2030.894658883361, 00:09:39.933 "io_failed": 1, 00:09:39.933 "io_timeout": 0, 00:09:39.933 "avg_latency_us": 85.61681011140826, 00:09:39.933 "min_latency_us": 25.041048034934498, 00:09:39.933 "max_latency_us": 1438.071615720524 00:09:39.933 } 00:09:39.933 ], 00:09:39.933 "core_count": 1 00:09:39.933 } 00:09:39.933 12:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.933 12:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67151 00:09:39.933 12:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67151 ']' 00:09:39.933 12:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67151 00:09:39.933 12:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:39.933 12:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.933 12:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67151 00:09:40.193 12:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.193 12:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.193 killing process with pid 67151 00:09:40.193 12:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67151' 00:09:40.193 12:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67151 00:09:40.193 [2024-11-19 12:01:43.311959] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.193 12:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67151 00:09:40.193 [2024-11-19 
12:01:43.540401] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.576 12:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:41.576 12:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.U9z1qC2XpF 00:09:41.576 12:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:41.576 12:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:41.576 12:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:41.576 12:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.576 12:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:41.576 12:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:41.576 00:09:41.576 real 0m4.458s 00:09:41.576 user 0m5.277s 00:09:41.576 sys 0m0.583s 00:09:41.576 12:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.576 12:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.576 ************************************ 00:09:41.576 END TEST raid_read_error_test 00:09:41.576 ************************************ 00:09:41.576 12:01:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:41.576 12:01:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:41.576 12:01:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.576 12:01:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.576 ************************************ 00:09:41.576 START TEST raid_write_error_test 00:09:41.576 ************************************ 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:41.576 12:01:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:41.576 12:01:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mpELGF8HLL 00:09:41.576 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67291 00:09:41.577 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67291 00:09:41.577 12:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:41.577 12:01:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67291 ']' 00:09:41.577 12:01:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.577 12:01:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.577 12:01:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:41.577 12:01:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.577 12:01:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.577 [2024-11-19 12:01:44.855244] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:09:41.577 [2024-11-19 12:01:44.855368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67291 ] 00:09:41.837 [2024-11-19 12:01:45.036821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.837 [2024-11-19 12:01:45.155203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.097 [2024-11-19 12:01:45.352968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.097 [2024-11-19 12:01:45.353019] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.357 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.357 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:42.357 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.357 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:42.357 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.357 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.357 BaseBdev1_malloc 00:09:42.357 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.357 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:42.357 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.357 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.357 true 00:09:42.357 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.357 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:42.357 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.357 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.618 [2024-11-19 12:01:45.736763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:42.618 [2024-11-19 12:01:45.736820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.618 [2024-11-19 12:01:45.736840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:42.618 [2024-11-19 12:01:45.736850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.618 [2024-11-19 12:01:45.738940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.618 [2024-11-19 12:01:45.738980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:42.618 BaseBdev1 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.618 BaseBdev2_malloc 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.618 true 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.618 [2024-11-19 12:01:45.803740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:42.618 [2024-11-19 12:01:45.803802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.618 [2024-11-19 12:01:45.803820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:42.618 [2024-11-19 12:01:45.803831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.618 [2024-11-19 12:01:45.805922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.618 [2024-11-19 12:01:45.805962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:42.618 BaseBdev2 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.618 12:01:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.618 BaseBdev3_malloc 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.618 true 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.618 [2024-11-19 12:01:45.881904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:42.618 [2024-11-19 12:01:45.881966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.618 [2024-11-19 12:01:45.881986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:42.618 [2024-11-19 12:01:45.882007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.618 [2024-11-19 12:01:45.884059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.618 [2024-11-19 12:01:45.884093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:42.618 BaseBdev3 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.618 [2024-11-19 12:01:45.893942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.618 [2024-11-19 12:01:45.895684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.618 [2024-11-19 12:01:45.895768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.618 [2024-11-19 12:01:45.895956] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:42.618 [2024-11-19 12:01:45.895975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:42.618 [2024-11-19 12:01:45.896241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:42.618 [2024-11-19 12:01:45.896402] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:42.618 [2024-11-19 12:01:45.896422] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:42.618 [2024-11-19 12:01:45.896565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.618 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.619 "name": "raid_bdev1", 00:09:42.619 "uuid": "a4d64b11-457a-448a-a875-0789ec2bfd61", 00:09:42.619 "strip_size_kb": 64, 00:09:42.619 "state": "online", 00:09:42.619 "raid_level": "concat", 00:09:42.619 "superblock": true, 00:09:42.619 "num_base_bdevs": 3, 00:09:42.619 "num_base_bdevs_discovered": 3, 00:09:42.619 "num_base_bdevs_operational": 3, 00:09:42.619 "base_bdevs_list": [ 00:09:42.619 { 00:09:42.619 
"name": "BaseBdev1", 00:09:42.619 "uuid": "3aec1eb1-027d-5c3c-bcc2-4217fdd6f4e9", 00:09:42.619 "is_configured": true, 00:09:42.619 "data_offset": 2048, 00:09:42.619 "data_size": 63488 00:09:42.619 }, 00:09:42.619 { 00:09:42.619 "name": "BaseBdev2", 00:09:42.619 "uuid": "4809858c-5193-5c6c-972d-c5cf1ae4d009", 00:09:42.619 "is_configured": true, 00:09:42.619 "data_offset": 2048, 00:09:42.619 "data_size": 63488 00:09:42.619 }, 00:09:42.619 { 00:09:42.619 "name": "BaseBdev3", 00:09:42.619 "uuid": "de042139-7a1d-5465-abb0-54072491c538", 00:09:42.619 "is_configured": true, 00:09:42.619 "data_offset": 2048, 00:09:42.619 "data_size": 63488 00:09:42.619 } 00:09:42.619 ] 00:09:42.619 }' 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.619 12:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.195 12:01:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:43.195 12:01:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:43.195 [2024-11-19 12:01:46.422416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.144 "name": "raid_bdev1", 00:09:44.144 "uuid": "a4d64b11-457a-448a-a875-0789ec2bfd61", 00:09:44.144 "strip_size_kb": 64, 00:09:44.144 "state": "online", 
00:09:44.144 "raid_level": "concat", 00:09:44.144 "superblock": true, 00:09:44.144 "num_base_bdevs": 3, 00:09:44.144 "num_base_bdevs_discovered": 3, 00:09:44.144 "num_base_bdevs_operational": 3, 00:09:44.144 "base_bdevs_list": [ 00:09:44.144 { 00:09:44.144 "name": "BaseBdev1", 00:09:44.144 "uuid": "3aec1eb1-027d-5c3c-bcc2-4217fdd6f4e9", 00:09:44.144 "is_configured": true, 00:09:44.144 "data_offset": 2048, 00:09:44.144 "data_size": 63488 00:09:44.144 }, 00:09:44.144 { 00:09:44.144 "name": "BaseBdev2", 00:09:44.144 "uuid": "4809858c-5193-5c6c-972d-c5cf1ae4d009", 00:09:44.144 "is_configured": true, 00:09:44.144 "data_offset": 2048, 00:09:44.144 "data_size": 63488 00:09:44.144 }, 00:09:44.144 { 00:09:44.144 "name": "BaseBdev3", 00:09:44.144 "uuid": "de042139-7a1d-5465-abb0-54072491c538", 00:09:44.144 "is_configured": true, 00:09:44.144 "data_offset": 2048, 00:09:44.144 "data_size": 63488 00:09:44.144 } 00:09:44.144 ] 00:09:44.144 }' 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.144 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.714 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:44.714 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.714 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.714 [2024-11-19 12:01:47.794115] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.714 [2024-11-19 12:01:47.794154] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.714 [2024-11-19 12:01:47.796693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.714 [2024-11-19 12:01:47.796741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.714 [2024-11-19 12:01:47.796777] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.714 [2024-11-19 12:01:47.796789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:44.714 { 00:09:44.714 "results": [ 00:09:44.714 { 00:09:44.714 "job": "raid_bdev1", 00:09:44.714 "core_mask": "0x1", 00:09:44.714 "workload": "randrw", 00:09:44.714 "percentage": 50, 00:09:44.714 "status": "finished", 00:09:44.714 "queue_depth": 1, 00:09:44.714 "io_size": 131072, 00:09:44.714 "runtime": 1.372543, 00:09:44.714 "iops": 16231.18547105628, 00:09:44.714 "mibps": 2028.898183882035, 00:09:44.714 "io_failed": 1, 00:09:44.714 "io_timeout": 0, 00:09:44.714 "avg_latency_us": 85.60957621399595, 00:09:44.714 "min_latency_us": 24.929257641921396, 00:09:44.714 "max_latency_us": 1373.6803493449781 00:09:44.714 } 00:09:44.714 ], 00:09:44.714 "core_count": 1 00:09:44.714 } 00:09:44.714 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.714 12:01:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67291 00:09:44.714 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67291 ']' 00:09:44.714 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67291 00:09:44.714 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:44.714 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.714 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67291 00:09:44.714 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.714 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.714 killing process with pid 67291 00:09:44.714 12:01:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67291' 00:09:44.714 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67291 00:09:44.714 [2024-11-19 12:01:47.838924] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.714 12:01:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67291 00:09:44.714 [2024-11-19 12:01:48.072083] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.101 12:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mpELGF8HLL 00:09:46.101 12:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:46.101 12:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:46.101 12:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:46.101 12:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:46.101 12:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.101 12:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:46.101 12:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:46.101 00:09:46.101 real 0m4.494s 00:09:46.101 user 0m5.316s 00:09:46.101 sys 0m0.575s 00:09:46.101 12:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.101 12:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.101 ************************************ 00:09:46.101 END TEST raid_write_error_test 00:09:46.101 ************************************ 00:09:46.101 12:01:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:46.101 12:01:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:46.101 12:01:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:46.101 12:01:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.101 12:01:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.101 ************************************ 00:09:46.101 START TEST raid_state_function_test 00:09:46.101 ************************************ 00:09:46.101 12:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:46.101 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:46.101 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:46.101 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:46.101 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:46.101 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:46.101 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.101 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:46.101 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.101 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67435 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67435' 00:09:46.102 Process raid pid: 67435 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67435 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67435 ']' 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.102 12:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.102 [2024-11-19 12:01:49.417664] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:09:46.102 [2024-11-19 12:01:49.417789] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.364 [2024-11-19 12:01:49.578831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.364 [2024-11-19 12:01:49.698083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.623 [2024-11-19 12:01:49.902060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.623 [2024-11-19 12:01:49.902095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.883 [2024-11-19 12:01:50.234964] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:46.883 [2024-11-19 12:01:50.235039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:46.883 [2024-11-19 12:01:50.235050] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.883 [2024-11-19 12:01:50.235061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.883 [2024-11-19 12:01:50.235067] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:46.883 [2024-11-19 12:01:50.235076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.883 
12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.883 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.143 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.143 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.143 "name": "Existed_Raid", 00:09:47.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.143 "strip_size_kb": 0, 00:09:47.143 "state": "configuring", 00:09:47.143 "raid_level": "raid1", 00:09:47.143 "superblock": false, 00:09:47.143 "num_base_bdevs": 3, 00:09:47.143 "num_base_bdevs_discovered": 0, 00:09:47.143 "num_base_bdevs_operational": 3, 00:09:47.143 "base_bdevs_list": [ 00:09:47.143 { 00:09:47.143 "name": "BaseBdev1", 00:09:47.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.143 "is_configured": false, 00:09:47.143 "data_offset": 0, 00:09:47.143 "data_size": 0 00:09:47.143 }, 00:09:47.143 { 00:09:47.143 "name": "BaseBdev2", 00:09:47.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.143 "is_configured": false, 00:09:47.143 "data_offset": 0, 00:09:47.143 "data_size": 0 00:09:47.143 }, 00:09:47.143 { 00:09:47.143 "name": "BaseBdev3", 00:09:47.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.143 "is_configured": false, 00:09:47.143 "data_offset": 0, 00:09:47.143 "data_size": 0 00:09:47.143 } 00:09:47.143 ] 00:09:47.143 }' 00:09:47.143 12:01:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.143 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.404 [2024-11-19 12:01:50.686155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.404 [2024-11-19 12:01:50.686199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.404 [2024-11-19 12:01:50.694131] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.404 [2024-11-19 12:01:50.694174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.404 [2024-11-19 12:01:50.694184] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.404 [2024-11-19 12:01:50.694193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.404 [2024-11-19 12:01:50.694199] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.404 [2024-11-19 12:01:50.694207] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.404 [2024-11-19 12:01:50.736655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.404 BaseBdev1 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.404 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.404 [ 00:09:47.404 { 00:09:47.404 "name": "BaseBdev1", 00:09:47.404 "aliases": [ 00:09:47.404 "cecbb590-0bef-4e0b-b98a-ad8a1fd20c42" 00:09:47.404 ], 00:09:47.404 "product_name": "Malloc disk", 00:09:47.404 "block_size": 512, 00:09:47.404 "num_blocks": 65536, 00:09:47.404 "uuid": "cecbb590-0bef-4e0b-b98a-ad8a1fd20c42", 00:09:47.404 "assigned_rate_limits": { 00:09:47.404 "rw_ios_per_sec": 0, 00:09:47.404 "rw_mbytes_per_sec": 0, 00:09:47.404 "r_mbytes_per_sec": 0, 00:09:47.404 "w_mbytes_per_sec": 0 00:09:47.404 }, 00:09:47.405 "claimed": true, 00:09:47.405 "claim_type": "exclusive_write", 00:09:47.405 "zoned": false, 00:09:47.405 "supported_io_types": { 00:09:47.405 "read": true, 00:09:47.405 "write": true, 00:09:47.405 "unmap": true, 00:09:47.405 "flush": true, 00:09:47.405 "reset": true, 00:09:47.405 "nvme_admin": false, 00:09:47.405 "nvme_io": false, 00:09:47.405 "nvme_io_md": false, 00:09:47.405 "write_zeroes": true, 00:09:47.405 "zcopy": true, 00:09:47.405 "get_zone_info": false, 00:09:47.405 "zone_management": false, 00:09:47.405 "zone_append": false, 00:09:47.405 "compare": false, 00:09:47.405 "compare_and_write": false, 00:09:47.405 "abort": true, 00:09:47.405 "seek_hole": false, 00:09:47.405 "seek_data": false, 00:09:47.405 "copy": true, 00:09:47.405 "nvme_iov_md": false 00:09:47.405 }, 00:09:47.405 "memory_domains": [ 00:09:47.405 { 00:09:47.405 "dma_device_id": "system", 00:09:47.405 "dma_device_type": 1 00:09:47.405 }, 00:09:47.405 { 00:09:47.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.405 "dma_device_type": 2 00:09:47.405 } 00:09:47.405 ], 00:09:47.405 "driver_specific": {} 00:09:47.405 } 00:09:47.405 ] 00:09:47.405 12:01:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.405 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:47.405 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:47.405 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.405 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.405 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.405 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.405 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.405 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.405 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.405 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.405 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.665 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.665 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.665 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.665 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.666 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.666 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:47.666 "name": "Existed_Raid", 00:09:47.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.666 "strip_size_kb": 0, 00:09:47.666 "state": "configuring", 00:09:47.666 "raid_level": "raid1", 00:09:47.666 "superblock": false, 00:09:47.666 "num_base_bdevs": 3, 00:09:47.666 "num_base_bdevs_discovered": 1, 00:09:47.666 "num_base_bdevs_operational": 3, 00:09:47.666 "base_bdevs_list": [ 00:09:47.666 { 00:09:47.666 "name": "BaseBdev1", 00:09:47.666 "uuid": "cecbb590-0bef-4e0b-b98a-ad8a1fd20c42", 00:09:47.666 "is_configured": true, 00:09:47.666 "data_offset": 0, 00:09:47.666 "data_size": 65536 00:09:47.666 }, 00:09:47.666 { 00:09:47.666 "name": "BaseBdev2", 00:09:47.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.666 "is_configured": false, 00:09:47.666 "data_offset": 0, 00:09:47.666 "data_size": 0 00:09:47.666 }, 00:09:47.666 { 00:09:47.666 "name": "BaseBdev3", 00:09:47.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.666 "is_configured": false, 00:09:47.666 "data_offset": 0, 00:09:47.666 "data_size": 0 00:09:47.666 } 00:09:47.666 ] 00:09:47.666 }' 00:09:47.666 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.666 12:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.926 [2024-11-19 12:01:51.215900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.926 [2024-11-19 12:01:51.215969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.926 [2024-11-19 12:01:51.227945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.926 [2024-11-19 12:01:51.229802] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.926 [2024-11-19 12:01:51.229845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.926 [2024-11-19 12:01:51.229855] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.926 [2024-11-19 12:01:51.229864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.926 "name": "Existed_Raid", 00:09:47.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.926 "strip_size_kb": 0, 00:09:47.926 "state": "configuring", 00:09:47.926 "raid_level": "raid1", 00:09:47.926 "superblock": false, 00:09:47.926 "num_base_bdevs": 3, 00:09:47.926 "num_base_bdevs_discovered": 1, 00:09:47.926 "num_base_bdevs_operational": 3, 00:09:47.926 "base_bdevs_list": [ 00:09:47.926 { 00:09:47.926 "name": "BaseBdev1", 00:09:47.926 "uuid": "cecbb590-0bef-4e0b-b98a-ad8a1fd20c42", 00:09:47.926 "is_configured": true, 00:09:47.926 "data_offset": 0, 00:09:47.926 "data_size": 65536 00:09:47.926 }, 00:09:47.926 { 00:09:47.926 "name": "BaseBdev2", 00:09:47.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.926 
"is_configured": false, 00:09:47.926 "data_offset": 0, 00:09:47.926 "data_size": 0 00:09:47.926 }, 00:09:47.926 { 00:09:47.926 "name": "BaseBdev3", 00:09:47.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.926 "is_configured": false, 00:09:47.926 "data_offset": 0, 00:09:47.926 "data_size": 0 00:09:47.926 } 00:09:47.926 ] 00:09:47.926 }' 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.926 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.496 [2024-11-19 12:01:51.732346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.496 BaseBdev2 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.496 12:01:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.496 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.496 [ 00:09:48.496 { 00:09:48.496 "name": "BaseBdev2", 00:09:48.496 "aliases": [ 00:09:48.496 "42be9e23-e444-481a-b8fb-5069d1b7a7c3" 00:09:48.496 ], 00:09:48.497 "product_name": "Malloc disk", 00:09:48.497 "block_size": 512, 00:09:48.497 "num_blocks": 65536, 00:09:48.497 "uuid": "42be9e23-e444-481a-b8fb-5069d1b7a7c3", 00:09:48.497 "assigned_rate_limits": { 00:09:48.497 "rw_ios_per_sec": 0, 00:09:48.497 "rw_mbytes_per_sec": 0, 00:09:48.497 "r_mbytes_per_sec": 0, 00:09:48.497 "w_mbytes_per_sec": 0 00:09:48.497 }, 00:09:48.497 "claimed": true, 00:09:48.497 "claim_type": "exclusive_write", 00:09:48.497 "zoned": false, 00:09:48.497 "supported_io_types": { 00:09:48.497 "read": true, 00:09:48.497 "write": true, 00:09:48.497 "unmap": true, 00:09:48.497 "flush": true, 00:09:48.497 "reset": true, 00:09:48.497 "nvme_admin": false, 00:09:48.497 "nvme_io": false, 00:09:48.497 "nvme_io_md": false, 00:09:48.497 "write_zeroes": true, 00:09:48.497 "zcopy": true, 00:09:48.497 "get_zone_info": false, 00:09:48.497 "zone_management": false, 00:09:48.497 "zone_append": false, 00:09:48.497 "compare": false, 00:09:48.497 "compare_and_write": false, 00:09:48.497 "abort": true, 00:09:48.497 "seek_hole": false, 00:09:48.497 "seek_data": false, 00:09:48.497 "copy": true, 00:09:48.497 "nvme_iov_md": false 00:09:48.497 }, 00:09:48.497 
"memory_domains": [ 00:09:48.497 { 00:09:48.497 "dma_device_id": "system", 00:09:48.497 "dma_device_type": 1 00:09:48.497 }, 00:09:48.497 { 00:09:48.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.497 "dma_device_type": 2 00:09:48.497 } 00:09:48.497 ], 00:09:48.497 "driver_specific": {} 00:09:48.497 } 00:09:48.497 ] 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.497 "name": "Existed_Raid", 00:09:48.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.497 "strip_size_kb": 0, 00:09:48.497 "state": "configuring", 00:09:48.497 "raid_level": "raid1", 00:09:48.497 "superblock": false, 00:09:48.497 "num_base_bdevs": 3, 00:09:48.497 "num_base_bdevs_discovered": 2, 00:09:48.497 "num_base_bdevs_operational": 3, 00:09:48.497 "base_bdevs_list": [ 00:09:48.497 { 00:09:48.497 "name": "BaseBdev1", 00:09:48.497 "uuid": "cecbb590-0bef-4e0b-b98a-ad8a1fd20c42", 00:09:48.497 "is_configured": true, 00:09:48.497 "data_offset": 0, 00:09:48.497 "data_size": 65536 00:09:48.497 }, 00:09:48.497 { 00:09:48.497 "name": "BaseBdev2", 00:09:48.497 "uuid": "42be9e23-e444-481a-b8fb-5069d1b7a7c3", 00:09:48.497 "is_configured": true, 00:09:48.497 "data_offset": 0, 00:09:48.497 "data_size": 65536 00:09:48.497 }, 00:09:48.497 { 00:09:48.497 "name": "BaseBdev3", 00:09:48.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.497 "is_configured": false, 00:09:48.497 "data_offset": 0, 00:09:48.497 "data_size": 0 00:09:48.497 } 00:09:48.497 ] 00:09:48.497 }' 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.497 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.085 [2024-11-19 12:01:52.255533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.085 [2024-11-19 12:01:52.255588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:49.085 [2024-11-19 12:01:52.255601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:49.085 [2024-11-19 12:01:52.255881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:49.085 [2024-11-19 12:01:52.256076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:49.085 [2024-11-19 12:01:52.256094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:49.085 [2024-11-19 12:01:52.256365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.085 BaseBdev3 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.085 [ 00:09:49.085 { 00:09:49.085 "name": "BaseBdev3", 00:09:49.085 "aliases": [ 00:09:49.085 "53332b13-27d1-4e5c-9298-e0b57d841b0c" 00:09:49.085 ], 00:09:49.085 "product_name": "Malloc disk", 00:09:49.085 "block_size": 512, 00:09:49.085 "num_blocks": 65536, 00:09:49.085 "uuid": "53332b13-27d1-4e5c-9298-e0b57d841b0c", 00:09:49.085 "assigned_rate_limits": { 00:09:49.085 "rw_ios_per_sec": 0, 00:09:49.085 "rw_mbytes_per_sec": 0, 00:09:49.085 "r_mbytes_per_sec": 0, 00:09:49.085 "w_mbytes_per_sec": 0 00:09:49.085 }, 00:09:49.085 "claimed": true, 00:09:49.085 "claim_type": "exclusive_write", 00:09:49.085 "zoned": false, 00:09:49.085 "supported_io_types": { 00:09:49.085 "read": true, 00:09:49.085 "write": true, 00:09:49.085 "unmap": true, 00:09:49.085 "flush": true, 00:09:49.085 "reset": true, 00:09:49.085 "nvme_admin": false, 00:09:49.085 "nvme_io": false, 00:09:49.085 "nvme_io_md": false, 00:09:49.085 "write_zeroes": true, 00:09:49.085 "zcopy": true, 00:09:49.085 "get_zone_info": false, 00:09:49.085 "zone_management": false, 00:09:49.085 "zone_append": false, 00:09:49.085 "compare": false, 00:09:49.085 "compare_and_write": false, 00:09:49.085 "abort": true, 00:09:49.085 "seek_hole": false, 00:09:49.085 "seek_data": false, 00:09:49.085 
"copy": true, 00:09:49.085 "nvme_iov_md": false 00:09:49.085 }, 00:09:49.085 "memory_domains": [ 00:09:49.085 { 00:09:49.085 "dma_device_id": "system", 00:09:49.085 "dma_device_type": 1 00:09:49.085 }, 00:09:49.085 { 00:09:49.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.085 "dma_device_type": 2 00:09:49.085 } 00:09:49.085 ], 00:09:49.085 "driver_specific": {} 00:09:49.085 } 00:09:49.085 ] 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.085 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.086 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.086 12:01:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.086 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.086 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.086 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.086 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.086 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.086 "name": "Existed_Raid", 00:09:49.086 "uuid": "a99fd8bb-fdc1-4396-b40a-234e5728e602", 00:09:49.086 "strip_size_kb": 0, 00:09:49.086 "state": "online", 00:09:49.086 "raid_level": "raid1", 00:09:49.086 "superblock": false, 00:09:49.086 "num_base_bdevs": 3, 00:09:49.086 "num_base_bdevs_discovered": 3, 00:09:49.086 "num_base_bdevs_operational": 3, 00:09:49.086 "base_bdevs_list": [ 00:09:49.086 { 00:09:49.086 "name": "BaseBdev1", 00:09:49.086 "uuid": "cecbb590-0bef-4e0b-b98a-ad8a1fd20c42", 00:09:49.086 "is_configured": true, 00:09:49.086 "data_offset": 0, 00:09:49.086 "data_size": 65536 00:09:49.086 }, 00:09:49.086 { 00:09:49.086 "name": "BaseBdev2", 00:09:49.086 "uuid": "42be9e23-e444-481a-b8fb-5069d1b7a7c3", 00:09:49.086 "is_configured": true, 00:09:49.086 "data_offset": 0, 00:09:49.086 "data_size": 65536 00:09:49.086 }, 00:09:49.086 { 00:09:49.086 "name": "BaseBdev3", 00:09:49.086 "uuid": "53332b13-27d1-4e5c-9298-e0b57d841b0c", 00:09:49.086 "is_configured": true, 00:09:49.086 "data_offset": 0, 00:09:49.086 "data_size": 65536 00:09:49.086 } 00:09:49.086 ] 00:09:49.086 }' 00:09:49.086 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.086 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.656 12:01:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:49.656 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:49.656 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.656 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.656 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.656 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.656 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:49.656 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.656 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.656 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.656 [2024-11-19 12:01:52.759223] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.656 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.656 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.656 "name": "Existed_Raid", 00:09:49.656 "aliases": [ 00:09:49.656 "a99fd8bb-fdc1-4396-b40a-234e5728e602" 00:09:49.656 ], 00:09:49.656 "product_name": "Raid Volume", 00:09:49.656 "block_size": 512, 00:09:49.656 "num_blocks": 65536, 00:09:49.656 "uuid": "a99fd8bb-fdc1-4396-b40a-234e5728e602", 00:09:49.656 "assigned_rate_limits": { 00:09:49.656 "rw_ios_per_sec": 0, 00:09:49.656 "rw_mbytes_per_sec": 0, 00:09:49.656 "r_mbytes_per_sec": 0, 00:09:49.656 "w_mbytes_per_sec": 0 00:09:49.656 }, 00:09:49.656 "claimed": false, 00:09:49.656 "zoned": false, 
00:09:49.656 "supported_io_types": { 00:09:49.656 "read": true, 00:09:49.656 "write": true, 00:09:49.656 "unmap": false, 00:09:49.656 "flush": false, 00:09:49.656 "reset": true, 00:09:49.656 "nvme_admin": false, 00:09:49.656 "nvme_io": false, 00:09:49.656 "nvme_io_md": false, 00:09:49.656 "write_zeroes": true, 00:09:49.656 "zcopy": false, 00:09:49.656 "get_zone_info": false, 00:09:49.656 "zone_management": false, 00:09:49.656 "zone_append": false, 00:09:49.656 "compare": false, 00:09:49.656 "compare_and_write": false, 00:09:49.656 "abort": false, 00:09:49.656 "seek_hole": false, 00:09:49.656 "seek_data": false, 00:09:49.656 "copy": false, 00:09:49.656 "nvme_iov_md": false 00:09:49.656 }, 00:09:49.656 "memory_domains": [ 00:09:49.656 { 00:09:49.656 "dma_device_id": "system", 00:09:49.656 "dma_device_type": 1 00:09:49.656 }, 00:09:49.656 { 00:09:49.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.657 "dma_device_type": 2 00:09:49.657 }, 00:09:49.657 { 00:09:49.657 "dma_device_id": "system", 00:09:49.657 "dma_device_type": 1 00:09:49.657 }, 00:09:49.657 { 00:09:49.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.657 "dma_device_type": 2 00:09:49.657 }, 00:09:49.657 { 00:09:49.657 "dma_device_id": "system", 00:09:49.657 "dma_device_type": 1 00:09:49.657 }, 00:09:49.657 { 00:09:49.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.657 "dma_device_type": 2 00:09:49.657 } 00:09:49.657 ], 00:09:49.657 "driver_specific": { 00:09:49.657 "raid": { 00:09:49.657 "uuid": "a99fd8bb-fdc1-4396-b40a-234e5728e602", 00:09:49.657 "strip_size_kb": 0, 00:09:49.657 "state": "online", 00:09:49.657 "raid_level": "raid1", 00:09:49.657 "superblock": false, 00:09:49.657 "num_base_bdevs": 3, 00:09:49.657 "num_base_bdevs_discovered": 3, 00:09:49.657 "num_base_bdevs_operational": 3, 00:09:49.657 "base_bdevs_list": [ 00:09:49.657 { 00:09:49.657 "name": "BaseBdev1", 00:09:49.657 "uuid": "cecbb590-0bef-4e0b-b98a-ad8a1fd20c42", 00:09:49.657 "is_configured": true, 00:09:49.657 
"data_offset": 0, 00:09:49.657 "data_size": 65536 00:09:49.657 }, 00:09:49.657 { 00:09:49.657 "name": "BaseBdev2", 00:09:49.657 "uuid": "42be9e23-e444-481a-b8fb-5069d1b7a7c3", 00:09:49.657 "is_configured": true, 00:09:49.657 "data_offset": 0, 00:09:49.657 "data_size": 65536 00:09:49.657 }, 00:09:49.657 { 00:09:49.657 "name": "BaseBdev3", 00:09:49.657 "uuid": "53332b13-27d1-4e5c-9298-e0b57d841b0c", 00:09:49.657 "is_configured": true, 00:09:49.657 "data_offset": 0, 00:09:49.657 "data_size": 65536 00:09:49.657 } 00:09:49.657 ] 00:09:49.657 } 00:09:49.657 } 00:09:49.657 }' 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:49.657 BaseBdev2 00:09:49.657 BaseBdev3' 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.657 12:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.657 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.917 [2024-11-19 12:01:53.038463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.917 "name": "Existed_Raid", 00:09:49.917 "uuid": "a99fd8bb-fdc1-4396-b40a-234e5728e602", 00:09:49.917 "strip_size_kb": 0, 00:09:49.917 "state": "online", 00:09:49.917 "raid_level": "raid1", 00:09:49.917 "superblock": false, 00:09:49.917 "num_base_bdevs": 3, 00:09:49.917 "num_base_bdevs_discovered": 2, 00:09:49.917 "num_base_bdevs_operational": 2, 00:09:49.917 "base_bdevs_list": [ 00:09:49.917 { 00:09:49.917 "name": null, 00:09:49.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.917 "is_configured": false, 00:09:49.917 "data_offset": 0, 00:09:49.917 "data_size": 65536 00:09:49.917 }, 00:09:49.917 { 00:09:49.917 "name": "BaseBdev2", 00:09:49.917 "uuid": "42be9e23-e444-481a-b8fb-5069d1b7a7c3", 00:09:49.917 "is_configured": true, 00:09:49.917 "data_offset": 0, 00:09:49.917 "data_size": 65536 00:09:49.917 }, 00:09:49.917 { 00:09:49.917 "name": "BaseBdev3", 00:09:49.917 "uuid": "53332b13-27d1-4e5c-9298-e0b57d841b0c", 00:09:49.917 "is_configured": true, 00:09:49.917 "data_offset": 0, 00:09:49.917 "data_size": 65536 00:09:49.917 } 00:09:49.917 ] 
00:09:49.917 }' 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.917 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.177 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:50.177 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.177 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.177 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.177 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.177 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.437 [2024-11-19 12:01:53.563862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.437 12:01:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.437 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.437 [2024-11-19 12:01:53.718369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:50.437 [2024-11-19 12:01:53.718482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.697 [2024-11-19 12:01:53.813063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.697 [2024-11-19 12:01:53.813133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.697 [2024-11-19 12:01:53.813145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:50.697 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.697 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:50.697 12:01:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.698 BaseBdev2 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.698 
12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.698 [ 00:09:50.698 { 00:09:50.698 "name": "BaseBdev2", 00:09:50.698 "aliases": [ 00:09:50.698 "a2f3e7bd-dc7f-4c72-87fb-f9ae0bb8a920" 00:09:50.698 ], 00:09:50.698 "product_name": "Malloc disk", 00:09:50.698 "block_size": 512, 00:09:50.698 "num_blocks": 65536, 00:09:50.698 "uuid": "a2f3e7bd-dc7f-4c72-87fb-f9ae0bb8a920", 00:09:50.698 "assigned_rate_limits": { 00:09:50.698 "rw_ios_per_sec": 0, 00:09:50.698 "rw_mbytes_per_sec": 0, 00:09:50.698 "r_mbytes_per_sec": 0, 00:09:50.698 "w_mbytes_per_sec": 0 00:09:50.698 }, 00:09:50.698 "claimed": false, 00:09:50.698 "zoned": false, 00:09:50.698 "supported_io_types": { 00:09:50.698 "read": true, 00:09:50.698 "write": true, 00:09:50.698 "unmap": true, 00:09:50.698 "flush": true, 00:09:50.698 "reset": true, 00:09:50.698 "nvme_admin": false, 00:09:50.698 "nvme_io": false, 00:09:50.698 "nvme_io_md": false, 00:09:50.698 "write_zeroes": true, 
00:09:50.698 "zcopy": true, 00:09:50.698 "get_zone_info": false, 00:09:50.698 "zone_management": false, 00:09:50.698 "zone_append": false, 00:09:50.698 "compare": false, 00:09:50.698 "compare_and_write": false, 00:09:50.698 "abort": true, 00:09:50.698 "seek_hole": false, 00:09:50.698 "seek_data": false, 00:09:50.698 "copy": true, 00:09:50.698 "nvme_iov_md": false 00:09:50.698 }, 00:09:50.698 "memory_domains": [ 00:09:50.698 { 00:09:50.698 "dma_device_id": "system", 00:09:50.698 "dma_device_type": 1 00:09:50.698 }, 00:09:50.698 { 00:09:50.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.698 "dma_device_type": 2 00:09:50.698 } 00:09:50.698 ], 00:09:50.698 "driver_specific": {} 00:09:50.698 } 00:09:50.698 ] 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.698 BaseBdev3 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.698 12:01:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.698 12:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.698 [ 00:09:50.698 { 00:09:50.698 "name": "BaseBdev3", 00:09:50.698 "aliases": [ 00:09:50.698 "95c1befe-587a-427e-8d90-28ccb232febc" 00:09:50.698 ], 00:09:50.698 "product_name": "Malloc disk", 00:09:50.698 "block_size": 512, 00:09:50.698 "num_blocks": 65536, 00:09:50.698 "uuid": "95c1befe-587a-427e-8d90-28ccb232febc", 00:09:50.698 "assigned_rate_limits": { 00:09:50.698 "rw_ios_per_sec": 0, 00:09:50.698 "rw_mbytes_per_sec": 0, 00:09:50.698 "r_mbytes_per_sec": 0, 00:09:50.698 "w_mbytes_per_sec": 0 00:09:50.698 }, 00:09:50.698 "claimed": false, 00:09:50.698 "zoned": false, 00:09:50.698 "supported_io_types": { 00:09:50.698 "read": true, 00:09:50.698 "write": true, 00:09:50.698 "unmap": true, 00:09:50.698 "flush": true, 00:09:50.698 "reset": true, 00:09:50.698 "nvme_admin": false, 00:09:50.698 "nvme_io": false, 00:09:50.698 "nvme_io_md": false, 00:09:50.698 "write_zeroes": true, 
00:09:50.698 "zcopy": true, 00:09:50.698 "get_zone_info": false, 00:09:50.698 "zone_management": false, 00:09:50.698 "zone_append": false, 00:09:50.698 "compare": false, 00:09:50.698 "compare_and_write": false, 00:09:50.698 "abort": true, 00:09:50.698 "seek_hole": false, 00:09:50.698 "seek_data": false, 00:09:50.698 "copy": true, 00:09:50.698 "nvme_iov_md": false 00:09:50.698 }, 00:09:50.698 "memory_domains": [ 00:09:50.698 { 00:09:50.698 "dma_device_id": "system", 00:09:50.698 "dma_device_type": 1 00:09:50.698 }, 00:09:50.698 { 00:09:50.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.698 "dma_device_type": 2 00:09:50.698 } 00:09:50.698 ], 00:09:50.698 "driver_specific": {} 00:09:50.698 } 00:09:50.698 ] 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.698 [2024-11-19 12:01:54.011664] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.698 [2024-11-19 12:01:54.011806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.698 [2024-11-19 12:01:54.011828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.698 [2024-11-19 12:01:54.013559] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.698 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.699 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.699 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.699 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.699 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.699 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.699 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.699 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.699 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.699 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:50.699 "name": "Existed_Raid", 00:09:50.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.699 "strip_size_kb": 0, 00:09:50.699 "state": "configuring", 00:09:50.699 "raid_level": "raid1", 00:09:50.699 "superblock": false, 00:09:50.699 "num_base_bdevs": 3, 00:09:50.699 "num_base_bdevs_discovered": 2, 00:09:50.699 "num_base_bdevs_operational": 3, 00:09:50.699 "base_bdevs_list": [ 00:09:50.699 { 00:09:50.699 "name": "BaseBdev1", 00:09:50.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.699 "is_configured": false, 00:09:50.699 "data_offset": 0, 00:09:50.699 "data_size": 0 00:09:50.699 }, 00:09:50.699 { 00:09:50.699 "name": "BaseBdev2", 00:09:50.699 "uuid": "a2f3e7bd-dc7f-4c72-87fb-f9ae0bb8a920", 00:09:50.699 "is_configured": true, 00:09:50.699 "data_offset": 0, 00:09:50.699 "data_size": 65536 00:09:50.699 }, 00:09:50.699 { 00:09:50.699 "name": "BaseBdev3", 00:09:50.699 "uuid": "95c1befe-587a-427e-8d90-28ccb232febc", 00:09:50.699 "is_configured": true, 00:09:50.699 "data_offset": 0, 00:09:50.699 "data_size": 65536 00:09:50.699 } 00:09:50.699 ] 00:09:50.699 }' 00:09:50.699 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.699 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.267 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:51.267 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.267 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.267 [2024-11-19 12:01:54.427016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:51.267 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.267 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.268 "name": "Existed_Raid", 00:09:51.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.268 "strip_size_kb": 0, 00:09:51.268 "state": "configuring", 00:09:51.268 "raid_level": "raid1", 00:09:51.268 "superblock": false, 00:09:51.268 "num_base_bdevs": 3, 
00:09:51.268 "num_base_bdevs_discovered": 1, 00:09:51.268 "num_base_bdevs_operational": 3, 00:09:51.268 "base_bdevs_list": [ 00:09:51.268 { 00:09:51.268 "name": "BaseBdev1", 00:09:51.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.268 "is_configured": false, 00:09:51.268 "data_offset": 0, 00:09:51.268 "data_size": 0 00:09:51.268 }, 00:09:51.268 { 00:09:51.268 "name": null, 00:09:51.268 "uuid": "a2f3e7bd-dc7f-4c72-87fb-f9ae0bb8a920", 00:09:51.268 "is_configured": false, 00:09:51.268 "data_offset": 0, 00:09:51.268 "data_size": 65536 00:09:51.268 }, 00:09:51.268 { 00:09:51.268 "name": "BaseBdev3", 00:09:51.268 "uuid": "95c1befe-587a-427e-8d90-28ccb232febc", 00:09:51.268 "is_configured": true, 00:09:51.268 "data_offset": 0, 00:09:51.268 "data_size": 65536 00:09:51.268 } 00:09:51.268 ] 00:09:51.268 }' 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.268 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.528 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.528 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.528 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.528 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:51.528 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.788 12:01:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.788 [2024-11-19 12:01:54.966688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.788 BaseBdev1 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.788 12:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.788 [ 00:09:51.788 { 00:09:51.788 "name": "BaseBdev1", 00:09:51.788 "aliases": [ 00:09:51.788 "00eb2f9d-bd30-4ebc-85f6-f725a4d8d1a6" 00:09:51.788 ], 00:09:51.788 "product_name": "Malloc disk", 
00:09:51.788 "block_size": 512, 00:09:51.788 "num_blocks": 65536, 00:09:51.788 "uuid": "00eb2f9d-bd30-4ebc-85f6-f725a4d8d1a6", 00:09:51.788 "assigned_rate_limits": { 00:09:51.788 "rw_ios_per_sec": 0, 00:09:51.788 "rw_mbytes_per_sec": 0, 00:09:51.788 "r_mbytes_per_sec": 0, 00:09:51.788 "w_mbytes_per_sec": 0 00:09:51.788 }, 00:09:51.788 "claimed": true, 00:09:51.788 "claim_type": "exclusive_write", 00:09:51.788 "zoned": false, 00:09:51.788 "supported_io_types": { 00:09:51.788 "read": true, 00:09:51.788 "write": true, 00:09:51.788 "unmap": true, 00:09:51.788 "flush": true, 00:09:51.788 "reset": true, 00:09:51.788 "nvme_admin": false, 00:09:51.788 "nvme_io": false, 00:09:51.788 "nvme_io_md": false, 00:09:51.788 "write_zeroes": true, 00:09:51.788 "zcopy": true, 00:09:51.788 "get_zone_info": false, 00:09:51.788 "zone_management": false, 00:09:51.788 "zone_append": false, 00:09:51.788 "compare": false, 00:09:51.788 "compare_and_write": false, 00:09:51.788 "abort": true, 00:09:51.788 "seek_hole": false, 00:09:51.788 "seek_data": false, 00:09:51.788 "copy": true, 00:09:51.788 "nvme_iov_md": false 00:09:51.788 }, 00:09:51.788 "memory_domains": [ 00:09:51.788 { 00:09:51.788 "dma_device_id": "system", 00:09:51.788 "dma_device_type": 1 00:09:51.788 }, 00:09:51.788 { 00:09:51.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.788 "dma_device_type": 2 00:09:51.788 } 00:09:51.788 ], 00:09:51.788 "driver_specific": {} 00:09:51.788 } 00:09:51.788 ] 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.788 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.788 "name": "Existed_Raid", 00:09:51.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.788 "strip_size_kb": 0, 00:09:51.788 "state": "configuring", 00:09:51.789 "raid_level": "raid1", 00:09:51.789 "superblock": false, 00:09:51.789 "num_base_bdevs": 3, 00:09:51.789 "num_base_bdevs_discovered": 2, 00:09:51.789 "num_base_bdevs_operational": 3, 00:09:51.789 "base_bdevs_list": [ 00:09:51.789 { 00:09:51.789 "name": "BaseBdev1", 00:09:51.789 "uuid": 
"00eb2f9d-bd30-4ebc-85f6-f725a4d8d1a6", 00:09:51.789 "is_configured": true, 00:09:51.789 "data_offset": 0, 00:09:51.789 "data_size": 65536 00:09:51.789 }, 00:09:51.789 { 00:09:51.789 "name": null, 00:09:51.789 "uuid": "a2f3e7bd-dc7f-4c72-87fb-f9ae0bb8a920", 00:09:51.789 "is_configured": false, 00:09:51.789 "data_offset": 0, 00:09:51.789 "data_size": 65536 00:09:51.789 }, 00:09:51.789 { 00:09:51.789 "name": "BaseBdev3", 00:09:51.789 "uuid": "95c1befe-587a-427e-8d90-28ccb232febc", 00:09:51.789 "is_configured": true, 00:09:51.789 "data_offset": 0, 00:09:51.789 "data_size": 65536 00:09:51.789 } 00:09:51.789 ] 00:09:51.789 }' 00:09:51.789 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.789 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.358 [2024-11-19 12:01:55.501803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:52.358 12:01:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.358 "name": "Existed_Raid", 00:09:52.358 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:52.358 "strip_size_kb": 0, 00:09:52.358 "state": "configuring", 00:09:52.358 "raid_level": "raid1", 00:09:52.358 "superblock": false, 00:09:52.358 "num_base_bdevs": 3, 00:09:52.358 "num_base_bdevs_discovered": 1, 00:09:52.358 "num_base_bdevs_operational": 3, 00:09:52.358 "base_bdevs_list": [ 00:09:52.358 { 00:09:52.358 "name": "BaseBdev1", 00:09:52.358 "uuid": "00eb2f9d-bd30-4ebc-85f6-f725a4d8d1a6", 00:09:52.358 "is_configured": true, 00:09:52.358 "data_offset": 0, 00:09:52.358 "data_size": 65536 00:09:52.358 }, 00:09:52.358 { 00:09:52.358 "name": null, 00:09:52.358 "uuid": "a2f3e7bd-dc7f-4c72-87fb-f9ae0bb8a920", 00:09:52.358 "is_configured": false, 00:09:52.358 "data_offset": 0, 00:09:52.358 "data_size": 65536 00:09:52.358 }, 00:09:52.358 { 00:09:52.358 "name": null, 00:09:52.358 "uuid": "95c1befe-587a-427e-8d90-28ccb232febc", 00:09:52.358 "is_configured": false, 00:09:52.358 "data_offset": 0, 00:09:52.358 "data_size": 65536 00:09:52.358 } 00:09:52.358 ] 00:09:52.358 }' 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.358 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.618 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.618 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.618 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.618 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:52.618 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.878 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:52.878 12:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:52.878 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.878 12:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.878 [2024-11-19 12:01:56.001088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.878 "name": "Existed_Raid", 00:09:52.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.878 "strip_size_kb": 0, 00:09:52.878 "state": "configuring", 00:09:52.878 "raid_level": "raid1", 00:09:52.878 "superblock": false, 00:09:52.878 "num_base_bdevs": 3, 00:09:52.878 "num_base_bdevs_discovered": 2, 00:09:52.878 "num_base_bdevs_operational": 3, 00:09:52.878 "base_bdevs_list": [ 00:09:52.878 { 00:09:52.878 "name": "BaseBdev1", 00:09:52.878 "uuid": "00eb2f9d-bd30-4ebc-85f6-f725a4d8d1a6", 00:09:52.878 "is_configured": true, 00:09:52.878 "data_offset": 0, 00:09:52.878 "data_size": 65536 00:09:52.878 }, 00:09:52.878 { 00:09:52.878 "name": null, 00:09:52.878 "uuid": "a2f3e7bd-dc7f-4c72-87fb-f9ae0bb8a920", 00:09:52.878 "is_configured": false, 00:09:52.878 "data_offset": 0, 00:09:52.878 "data_size": 65536 00:09:52.878 }, 00:09:52.878 { 00:09:52.878 "name": "BaseBdev3", 00:09:52.878 "uuid": "95c1befe-587a-427e-8d90-28ccb232febc", 00:09:52.878 "is_configured": true, 00:09:52.878 "data_offset": 0, 00:09:52.878 "data_size": 65536 00:09:52.878 } 00:09:52.878 ] 00:09:52.878 }' 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.878 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.138 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:53.138 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.138 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:53.138 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.138 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.397 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:53.397 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:53.397 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.397 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.397 [2024-11-19 12:01:56.520212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:53.397 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.397 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.397 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.397 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.397 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.398 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.398 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.398 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.398 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.398 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.398 12:01:56 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.398 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.398 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.398 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.398 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.398 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.398 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.398 "name": "Existed_Raid", 00:09:53.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.398 "strip_size_kb": 0, 00:09:53.398 "state": "configuring", 00:09:53.398 "raid_level": "raid1", 00:09:53.398 "superblock": false, 00:09:53.398 "num_base_bdevs": 3, 00:09:53.398 "num_base_bdevs_discovered": 1, 00:09:53.398 "num_base_bdevs_operational": 3, 00:09:53.398 "base_bdevs_list": [ 00:09:53.398 { 00:09:53.398 "name": null, 00:09:53.398 "uuid": "00eb2f9d-bd30-4ebc-85f6-f725a4d8d1a6", 00:09:53.398 "is_configured": false, 00:09:53.398 "data_offset": 0, 00:09:53.398 "data_size": 65536 00:09:53.398 }, 00:09:53.398 { 00:09:53.398 "name": null, 00:09:53.398 "uuid": "a2f3e7bd-dc7f-4c72-87fb-f9ae0bb8a920", 00:09:53.398 "is_configured": false, 00:09:53.398 "data_offset": 0, 00:09:53.398 "data_size": 65536 00:09:53.398 }, 00:09:53.398 { 00:09:53.398 "name": "BaseBdev3", 00:09:53.398 "uuid": "95c1befe-587a-427e-8d90-28ccb232febc", 00:09:53.398 "is_configured": true, 00:09:53.398 "data_offset": 0, 00:09:53.398 "data_size": 65536 00:09:53.398 } 00:09:53.398 ] 00:09:53.398 }' 00:09:53.398 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.398 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:09:53.657 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.657 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.657 12:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.657 12:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:53.657 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.916 [2024-11-19 12:01:57.049336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.916 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.916 "name": "Existed_Raid", 00:09:53.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.916 "strip_size_kb": 0, 00:09:53.916 "state": "configuring", 00:09:53.917 "raid_level": "raid1", 00:09:53.917 "superblock": false, 00:09:53.917 "num_base_bdevs": 3, 00:09:53.917 "num_base_bdevs_discovered": 2, 00:09:53.917 "num_base_bdevs_operational": 3, 00:09:53.917 "base_bdevs_list": [ 00:09:53.917 { 00:09:53.917 "name": null, 00:09:53.917 "uuid": "00eb2f9d-bd30-4ebc-85f6-f725a4d8d1a6", 00:09:53.917 "is_configured": false, 00:09:53.917 "data_offset": 0, 00:09:53.917 "data_size": 65536 00:09:53.917 }, 00:09:53.917 { 00:09:53.917 "name": "BaseBdev2", 00:09:53.917 "uuid": "a2f3e7bd-dc7f-4c72-87fb-f9ae0bb8a920", 00:09:53.917 "is_configured": true, 00:09:53.917 "data_offset": 0, 00:09:53.917 "data_size": 65536 00:09:53.917 }, 00:09:53.917 { 00:09:53.917 "name": "BaseBdev3", 
00:09:53.917 "uuid": "95c1befe-587a-427e-8d90-28ccb232febc", 00:09:53.917 "is_configured": true, 00:09:53.917 "data_offset": 0, 00:09:53.917 "data_size": 65536 00:09:53.917 } 00:09:53.917 ] 00:09:53.917 }' 00:09:53.917 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.917 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.176 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:54.176 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.176 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.176 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.176 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.176 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:54.177 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.177 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:54.177 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.177 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.177 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.177 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 00eb2f9d-bd30-4ebc-85f6-f725a4d8d1a6 00:09:54.177 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.177 12:01:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.436 [2024-11-19 12:01:57.581036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:54.436 [2024-11-19 12:01:57.581088] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:54.436 [2024-11-19 12:01:57.581095] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:54.436 [2024-11-19 12:01:57.581339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:54.436 [2024-11-19 12:01:57.581488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:54.436 [2024-11-19 12:01:57.581501] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:54.436 [2024-11-19 12:01:57.581750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.436 NewBaseBdev 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.436 
12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.436 [ 00:09:54.436 { 00:09:54.436 "name": "NewBaseBdev", 00:09:54.436 "aliases": [ 00:09:54.436 "00eb2f9d-bd30-4ebc-85f6-f725a4d8d1a6" 00:09:54.436 ], 00:09:54.436 "product_name": "Malloc disk", 00:09:54.436 "block_size": 512, 00:09:54.436 "num_blocks": 65536, 00:09:54.436 "uuid": "00eb2f9d-bd30-4ebc-85f6-f725a4d8d1a6", 00:09:54.436 "assigned_rate_limits": { 00:09:54.436 "rw_ios_per_sec": 0, 00:09:54.436 "rw_mbytes_per_sec": 0, 00:09:54.436 "r_mbytes_per_sec": 0, 00:09:54.436 "w_mbytes_per_sec": 0 00:09:54.436 }, 00:09:54.436 "claimed": true, 00:09:54.436 "claim_type": "exclusive_write", 00:09:54.436 "zoned": false, 00:09:54.436 "supported_io_types": { 00:09:54.436 "read": true, 00:09:54.436 "write": true, 00:09:54.436 "unmap": true, 00:09:54.436 "flush": true, 00:09:54.436 "reset": true, 00:09:54.436 "nvme_admin": false, 00:09:54.436 "nvme_io": false, 00:09:54.436 "nvme_io_md": false, 00:09:54.436 "write_zeroes": true, 00:09:54.436 "zcopy": true, 00:09:54.436 "get_zone_info": false, 00:09:54.436 "zone_management": false, 00:09:54.436 "zone_append": false, 00:09:54.436 "compare": false, 00:09:54.436 "compare_and_write": false, 00:09:54.436 "abort": true, 00:09:54.436 "seek_hole": false, 00:09:54.436 "seek_data": false, 00:09:54.436 "copy": true, 00:09:54.436 "nvme_iov_md": false 00:09:54.436 }, 00:09:54.436 "memory_domains": [ 00:09:54.436 { 00:09:54.436 "dma_device_id": "system", 00:09:54.436 "dma_device_type": 1 
00:09:54.436 }, 00:09:54.436 { 00:09:54.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.436 "dma_device_type": 2 00:09:54.436 } 00:09:54.436 ], 00:09:54.436 "driver_specific": {} 00:09:54.436 } 00:09:54.436 ] 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.436 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.436 "name": "Existed_Raid", 00:09:54.436 "uuid": "fec3e31c-cfb4-477d-9783-c9f7f6c1bfc0", 00:09:54.436 "strip_size_kb": 0, 00:09:54.436 "state": "online", 00:09:54.436 "raid_level": "raid1", 00:09:54.436 "superblock": false, 00:09:54.436 "num_base_bdevs": 3, 00:09:54.436 "num_base_bdevs_discovered": 3, 00:09:54.436 "num_base_bdevs_operational": 3, 00:09:54.436 "base_bdevs_list": [ 00:09:54.436 { 00:09:54.437 "name": "NewBaseBdev", 00:09:54.437 "uuid": "00eb2f9d-bd30-4ebc-85f6-f725a4d8d1a6", 00:09:54.437 "is_configured": true, 00:09:54.437 "data_offset": 0, 00:09:54.437 "data_size": 65536 00:09:54.437 }, 00:09:54.437 { 00:09:54.437 "name": "BaseBdev2", 00:09:54.437 "uuid": "a2f3e7bd-dc7f-4c72-87fb-f9ae0bb8a920", 00:09:54.437 "is_configured": true, 00:09:54.437 "data_offset": 0, 00:09:54.437 "data_size": 65536 00:09:54.437 }, 00:09:54.437 { 00:09:54.437 "name": "BaseBdev3", 00:09:54.437 "uuid": "95c1befe-587a-427e-8d90-28ccb232febc", 00:09:54.437 "is_configured": true, 00:09:54.437 "data_offset": 0, 00:09:54.437 "data_size": 65536 00:09:54.437 } 00:09:54.437 ] 00:09:54.437 }' 00:09:54.437 12:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.437 12:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.696 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:54.696 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:54.696 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:54.696 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:54.696 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:54.696 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:54.696 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:54.696 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:54.696 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.696 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.696 [2024-11-19 12:01:58.036627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.696 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:54.957 "name": "Existed_Raid", 00:09:54.957 "aliases": [ 00:09:54.957 "fec3e31c-cfb4-477d-9783-c9f7f6c1bfc0" 00:09:54.957 ], 00:09:54.957 "product_name": "Raid Volume", 00:09:54.957 "block_size": 512, 00:09:54.957 "num_blocks": 65536, 00:09:54.957 "uuid": "fec3e31c-cfb4-477d-9783-c9f7f6c1bfc0", 00:09:54.957 "assigned_rate_limits": { 00:09:54.957 "rw_ios_per_sec": 0, 00:09:54.957 "rw_mbytes_per_sec": 0, 00:09:54.957 "r_mbytes_per_sec": 0, 00:09:54.957 "w_mbytes_per_sec": 0 00:09:54.957 }, 00:09:54.957 "claimed": false, 00:09:54.957 "zoned": false, 00:09:54.957 "supported_io_types": { 00:09:54.957 "read": true, 00:09:54.957 "write": true, 00:09:54.957 "unmap": false, 00:09:54.957 "flush": false, 00:09:54.957 "reset": true, 00:09:54.957 "nvme_admin": false, 00:09:54.957 "nvme_io": false, 00:09:54.957 "nvme_io_md": false, 00:09:54.957 "write_zeroes": true, 00:09:54.957 "zcopy": false, 00:09:54.957 "get_zone_info": false, 00:09:54.957 "zone_management": false, 00:09:54.957 
"zone_append": false, 00:09:54.957 "compare": false, 00:09:54.957 "compare_and_write": false, 00:09:54.957 "abort": false, 00:09:54.957 "seek_hole": false, 00:09:54.957 "seek_data": false, 00:09:54.957 "copy": false, 00:09:54.957 "nvme_iov_md": false 00:09:54.957 }, 00:09:54.957 "memory_domains": [ 00:09:54.957 { 00:09:54.957 "dma_device_id": "system", 00:09:54.957 "dma_device_type": 1 00:09:54.957 }, 00:09:54.957 { 00:09:54.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.957 "dma_device_type": 2 00:09:54.957 }, 00:09:54.957 { 00:09:54.957 "dma_device_id": "system", 00:09:54.957 "dma_device_type": 1 00:09:54.957 }, 00:09:54.957 { 00:09:54.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.957 "dma_device_type": 2 00:09:54.957 }, 00:09:54.957 { 00:09:54.957 "dma_device_id": "system", 00:09:54.957 "dma_device_type": 1 00:09:54.957 }, 00:09:54.957 { 00:09:54.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.957 "dma_device_type": 2 00:09:54.957 } 00:09:54.957 ], 00:09:54.957 "driver_specific": { 00:09:54.957 "raid": { 00:09:54.957 "uuid": "fec3e31c-cfb4-477d-9783-c9f7f6c1bfc0", 00:09:54.957 "strip_size_kb": 0, 00:09:54.957 "state": "online", 00:09:54.957 "raid_level": "raid1", 00:09:54.957 "superblock": false, 00:09:54.957 "num_base_bdevs": 3, 00:09:54.957 "num_base_bdevs_discovered": 3, 00:09:54.957 "num_base_bdevs_operational": 3, 00:09:54.957 "base_bdevs_list": [ 00:09:54.957 { 00:09:54.957 "name": "NewBaseBdev", 00:09:54.957 "uuid": "00eb2f9d-bd30-4ebc-85f6-f725a4d8d1a6", 00:09:54.957 "is_configured": true, 00:09:54.957 "data_offset": 0, 00:09:54.957 "data_size": 65536 00:09:54.957 }, 00:09:54.957 { 00:09:54.957 "name": "BaseBdev2", 00:09:54.957 "uuid": "a2f3e7bd-dc7f-4c72-87fb-f9ae0bb8a920", 00:09:54.957 "is_configured": true, 00:09:54.957 "data_offset": 0, 00:09:54.957 "data_size": 65536 00:09:54.957 }, 00:09:54.957 { 00:09:54.957 "name": "BaseBdev3", 00:09:54.957 "uuid": "95c1befe-587a-427e-8d90-28ccb232febc", 00:09:54.957 "is_configured": true, 
00:09:54.957 "data_offset": 0, 00:09:54.957 "data_size": 65536 00:09:54.957 } 00:09:54.957 ] 00:09:54.957 } 00:09:54.957 } 00:09:54.957 }' 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:54.957 BaseBdev2 00:09:54.957 BaseBdev3' 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.957 12:01:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.957 [2024-11-19 12:01:58.279891] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:54.957 [2024-11-19 12:01:58.280027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.957 [2024-11-19 12:01:58.280100] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.957 [2024-11-19 12:01:58.280375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.957 [2024-11-19 12:01:58.280385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67435 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67435 ']' 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67435 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67435 00:09:54.957 killing process with pid 67435 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67435' 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67435 00:09:54.957 [2024-11-19 12:01:58.326919] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:09:54.957 12:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67435 00:09:55.525 [2024-11-19 12:01:58.627165] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:56.461 00:09:56.461 real 0m10.405s 00:09:56.461 user 0m16.562s 00:09:56.461 sys 0m1.810s 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.461 ************************************ 00:09:56.461 END TEST raid_state_function_test 00:09:56.461 ************************************ 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.461 12:01:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:56.461 12:01:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:56.461 12:01:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.461 12:01:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.461 ************************************ 00:09:56.461 START TEST raid_state_function_test_sb 00:09:56.461 ************************************ 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68056 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68056' 00:09:56.461 Process raid pid: 68056 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68056 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68056 ']' 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.461 12:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.721 [2024-11-19 12:01:59.889868] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:09:56.721 [2024-11-19 12:01:59.890143] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.721 [2024-11-19 12:02:00.070845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.979 [2024-11-19 12:02:00.187713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.238 [2024-11-19 12:02:00.389075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.238 [2024-11-19 12:02:00.389183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.498 [2024-11-19 12:02:00.723647] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.498 [2024-11-19 12:02:00.723789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.498 [2024-11-19 12:02:00.723803] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.498 [2024-11-19 12:02:00.723812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.498 [2024-11-19 12:02:00.723819] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:57.498 [2024-11-19 12:02:00.723828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.498 "name": "Existed_Raid", 00:09:57.498 "uuid": "2ac590fe-81f9-491a-8be4-fbee012f27bd", 00:09:57.498 "strip_size_kb": 0, 00:09:57.498 "state": "configuring", 00:09:57.498 "raid_level": "raid1", 00:09:57.498 "superblock": true, 00:09:57.498 "num_base_bdevs": 3, 00:09:57.498 "num_base_bdevs_discovered": 0, 00:09:57.498 "num_base_bdevs_operational": 3, 00:09:57.498 "base_bdevs_list": [ 00:09:57.498 { 00:09:57.498 "name": "BaseBdev1", 00:09:57.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.498 "is_configured": false, 00:09:57.498 "data_offset": 0, 00:09:57.498 "data_size": 0 00:09:57.498 }, 00:09:57.498 { 00:09:57.498 "name": "BaseBdev2", 00:09:57.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.498 "is_configured": false, 00:09:57.498 "data_offset": 0, 00:09:57.498 "data_size": 0 00:09:57.498 }, 00:09:57.498 { 00:09:57.498 "name": "BaseBdev3", 00:09:57.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.498 "is_configured": false, 00:09:57.498 "data_offset": 0, 00:09:57.498 "data_size": 0 00:09:57.498 } 00:09:57.498 ] 00:09:57.498 }' 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.498 12:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.069 [2024-11-19 12:02:01.154884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.069 [2024-11-19 12:02:01.155057] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.069 [2024-11-19 12:02:01.162851] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.069 [2024-11-19 12:02:01.162951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.069 [2024-11-19 12:02:01.162982] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.069 [2024-11-19 12:02:01.163023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.069 [2024-11-19 12:02:01.163055] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.069 [2024-11-19 12:02:01.163080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.069 [2024-11-19 12:02:01.207305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.069 BaseBdev1 
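The `verify_raid_bdev_state` helper traced above pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and compares the resulting fields against the expected state. A minimal Python sketch of that check — the function name mirrors the shell helper but is illustrative, and the sample JSON is abridged from the first `bdev_raid_get_bdevs` dump in this log:

```python
import json

def verify_raid_bdev_state(bdevs_json, name, expected_state,
                           raid_level, strip_size, num_operational):
    """Mimic bdev_raid.sh's verify_raid_bdev_state: select the raid
    bdev by name and assert on the fields the test compares."""
    info = next(b for b in json.loads(bdevs_json) if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return info

# Abridged from the `bdev_raid_get_bdevs all` output above: superblock
# raid1, no base bdevs discovered yet, so the state is "configuring".
sample = json.dumps([{
    "name": "Existed_Raid",
    "strip_size_kb": 0,
    "state": "configuring",
    "raid_level": "raid1",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 3,
}])

info = verify_raid_bdev_state(sample, "Existed_Raid",
                              "configuring", "raid1", 0, 3)
```

As in the shell version, a mismatch on any field fails the test immediately; selection by name keeps the check stable when other bdevs are present in the dump.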
00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.069 [ 00:09:58.069 { 00:09:58.069 "name": "BaseBdev1", 00:09:58.069 "aliases": [ 00:09:58.069 "c570cdbd-6835-4b42-b11a-838520948e0c" 00:09:58.069 ], 00:09:58.069 "product_name": "Malloc disk", 00:09:58.069 "block_size": 512, 00:09:58.069 "num_blocks": 65536, 00:09:58.069 "uuid": "c570cdbd-6835-4b42-b11a-838520948e0c", 00:09:58.069 "assigned_rate_limits": { 00:09:58.069 
"rw_ios_per_sec": 0, 00:09:58.069 "rw_mbytes_per_sec": 0, 00:09:58.069 "r_mbytes_per_sec": 0, 00:09:58.069 "w_mbytes_per_sec": 0 00:09:58.069 }, 00:09:58.069 "claimed": true, 00:09:58.069 "claim_type": "exclusive_write", 00:09:58.069 "zoned": false, 00:09:58.069 "supported_io_types": { 00:09:58.069 "read": true, 00:09:58.069 "write": true, 00:09:58.069 "unmap": true, 00:09:58.069 "flush": true, 00:09:58.069 "reset": true, 00:09:58.069 "nvme_admin": false, 00:09:58.069 "nvme_io": false, 00:09:58.069 "nvme_io_md": false, 00:09:58.069 "write_zeroes": true, 00:09:58.069 "zcopy": true, 00:09:58.069 "get_zone_info": false, 00:09:58.069 "zone_management": false, 00:09:58.069 "zone_append": false, 00:09:58.069 "compare": false, 00:09:58.069 "compare_and_write": false, 00:09:58.069 "abort": true, 00:09:58.069 "seek_hole": false, 00:09:58.069 "seek_data": false, 00:09:58.069 "copy": true, 00:09:58.069 "nvme_iov_md": false 00:09:58.069 }, 00:09:58.069 "memory_domains": [ 00:09:58.069 { 00:09:58.069 "dma_device_id": "system", 00:09:58.069 "dma_device_type": 1 00:09:58.069 }, 00:09:58.069 { 00:09:58.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.069 "dma_device_type": 2 00:09:58.069 } 00:09:58.069 ], 00:09:58.069 "driver_specific": {} 00:09:58.069 } 00:09:58.069 ] 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.069 "name": "Existed_Raid", 00:09:58.069 "uuid": "3aa8cf16-d0db-4639-80ce-02206f746354", 00:09:58.069 "strip_size_kb": 0, 00:09:58.069 "state": "configuring", 00:09:58.069 "raid_level": "raid1", 00:09:58.069 "superblock": true, 00:09:58.069 "num_base_bdevs": 3, 00:09:58.069 "num_base_bdevs_discovered": 1, 00:09:58.069 "num_base_bdevs_operational": 3, 00:09:58.069 "base_bdevs_list": [ 00:09:58.069 { 00:09:58.069 "name": "BaseBdev1", 00:09:58.069 "uuid": "c570cdbd-6835-4b42-b11a-838520948e0c", 00:09:58.069 "is_configured": true, 00:09:58.069 "data_offset": 2048, 00:09:58.069 "data_size": 63488 
00:09:58.069 }, 00:09:58.069 { 00:09:58.069 "name": "BaseBdev2", 00:09:58.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.069 "is_configured": false, 00:09:58.069 "data_offset": 0, 00:09:58.069 "data_size": 0 00:09:58.069 }, 00:09:58.069 { 00:09:58.069 "name": "BaseBdev3", 00:09:58.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.069 "is_configured": false, 00:09:58.069 "data_offset": 0, 00:09:58.069 "data_size": 0 00:09:58.069 } 00:09:58.069 ] 00:09:58.069 }' 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.069 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.329 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.329 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.329 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.329 [2024-11-19 12:02:01.678538] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.329 [2024-11-19 12:02:01.678675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:58.329 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.329 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.329 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.329 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.329 [2024-11-19 12:02:01.686570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.329 [2024-11-19 12:02:01.688468] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.329 [2024-11-19 12:02:01.688526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.329 [2024-11-19 12:02:01.688536] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.329 [2024-11-19 12:02:01.688546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.329 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.329 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:58.329 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.329 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.329 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.329 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.329 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.330 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.330 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.330 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.330 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.330 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.330 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:58.330 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.330 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.330 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.330 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.588 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.588 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.588 "name": "Existed_Raid", 00:09:58.588 "uuid": "82e4043e-dee9-4b59-bf73-d441e47c6bf9", 00:09:58.588 "strip_size_kb": 0, 00:09:58.588 "state": "configuring", 00:09:58.588 "raid_level": "raid1", 00:09:58.588 "superblock": true, 00:09:58.588 "num_base_bdevs": 3, 00:09:58.588 "num_base_bdevs_discovered": 1, 00:09:58.588 "num_base_bdevs_operational": 3, 00:09:58.588 "base_bdevs_list": [ 00:09:58.588 { 00:09:58.588 "name": "BaseBdev1", 00:09:58.588 "uuid": "c570cdbd-6835-4b42-b11a-838520948e0c", 00:09:58.588 "is_configured": true, 00:09:58.588 "data_offset": 2048, 00:09:58.588 "data_size": 63488 00:09:58.588 }, 00:09:58.588 { 00:09:58.588 "name": "BaseBdev2", 00:09:58.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.588 "is_configured": false, 00:09:58.588 "data_offset": 0, 00:09:58.588 "data_size": 0 00:09:58.588 }, 00:09:58.588 { 00:09:58.588 "name": "BaseBdev3", 00:09:58.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.588 "is_configured": false, 00:09:58.588 "data_offset": 0, 00:09:58.588 "data_size": 0 00:09:58.588 } 00:09:58.588 ] 00:09:58.588 }' 00:09:58.588 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.588 12:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.848 [2024-11-19 12:02:02.121092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.848 BaseBdev2 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.848 [ 00:09:58.848 { 00:09:58.848 "name": "BaseBdev2", 00:09:58.848 "aliases": [ 00:09:58.848 "70c21f96-b286-4740-b467-387cc255a084" 00:09:58.848 ], 00:09:58.848 "product_name": "Malloc disk", 00:09:58.848 "block_size": 512, 00:09:58.848 "num_blocks": 65536, 00:09:58.848 "uuid": "70c21f96-b286-4740-b467-387cc255a084", 00:09:58.848 "assigned_rate_limits": { 00:09:58.848 "rw_ios_per_sec": 0, 00:09:58.848 "rw_mbytes_per_sec": 0, 00:09:58.848 "r_mbytes_per_sec": 0, 00:09:58.848 "w_mbytes_per_sec": 0 00:09:58.848 }, 00:09:58.848 "claimed": true, 00:09:58.848 "claim_type": "exclusive_write", 00:09:58.848 "zoned": false, 00:09:58.848 "supported_io_types": { 00:09:58.848 "read": true, 00:09:58.848 "write": true, 00:09:58.848 "unmap": true, 00:09:58.848 "flush": true, 00:09:58.848 "reset": true, 00:09:58.848 "nvme_admin": false, 00:09:58.848 "nvme_io": false, 00:09:58.848 "nvme_io_md": false, 00:09:58.848 "write_zeroes": true, 00:09:58.848 "zcopy": true, 00:09:58.848 "get_zone_info": false, 00:09:58.848 "zone_management": false, 00:09:58.848 "zone_append": false, 00:09:58.848 "compare": false, 00:09:58.848 "compare_and_write": false, 00:09:58.848 "abort": true, 00:09:58.848 "seek_hole": false, 00:09:58.848 "seek_data": false, 00:09:58.848 "copy": true, 00:09:58.848 "nvme_iov_md": false 00:09:58.848 }, 00:09:58.848 "memory_domains": [ 00:09:58.848 { 00:09:58.848 "dma_device_id": "system", 00:09:58.848 "dma_device_type": 1 00:09:58.848 }, 00:09:58.848 { 00:09:58.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.848 "dma_device_type": 2 00:09:58.848 } 00:09:58.848 ], 00:09:58.848 "driver_specific": {} 00:09:58.848 } 00:09:58.848 ] 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
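The `waitforbdev BaseBdev2` call traced above retries `rpc_cmd bdev_get_bdevs -b <name> -t 2000` until the bdev appears, with a default 2000 ms timeout. A rough sketch of that polling loop, using a stand-in `rpc_cmd` callable in place of SPDK's real RPC client (the stand-in and its return value are assumptions for illustration, echoing the Malloc bdev dumped above):

```python
import time

def waitforbdev(rpc_cmd, bdev_name, timeout_ms=2000, poll_ms=50):
    """Poll the RPC layer until the named bdev shows up, roughly as
    autotest_common.sh's waitforbdev does; return False on timeout."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if any(b["name"] == bdev_name for b in rpc_cmd("bdev_get_bdevs")):
            return True
        time.sleep(poll_ms / 1000.0)
    return False

# Stand-in RPC returning the Malloc disk fields seen in the log above.
fake_rpc = lambda _method: [{"name": "BaseBdev2",
                             "block_size": 512,
                             "num_blocks": 65536}]
found = waitforbdev(fake_rpc, "BaseBdev2")
```

Polling with a deadline rather than a fixed retry count keeps the wait bounded even when individual RPC calls are slow.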
00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.848 
12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.848 "name": "Existed_Raid", 00:09:58.848 "uuid": "82e4043e-dee9-4b59-bf73-d441e47c6bf9", 00:09:58.848 "strip_size_kb": 0, 00:09:58.848 "state": "configuring", 00:09:58.848 "raid_level": "raid1", 00:09:58.848 "superblock": true, 00:09:58.848 "num_base_bdevs": 3, 00:09:58.848 "num_base_bdevs_discovered": 2, 00:09:58.848 "num_base_bdevs_operational": 3, 00:09:58.848 "base_bdevs_list": [ 00:09:58.848 { 00:09:58.848 "name": "BaseBdev1", 00:09:58.848 "uuid": "c570cdbd-6835-4b42-b11a-838520948e0c", 00:09:58.848 "is_configured": true, 00:09:58.848 "data_offset": 2048, 00:09:58.848 "data_size": 63488 00:09:58.848 }, 00:09:58.848 { 00:09:58.848 "name": "BaseBdev2", 00:09:58.848 "uuid": "70c21f96-b286-4740-b467-387cc255a084", 00:09:58.848 "is_configured": true, 00:09:58.848 "data_offset": 2048, 00:09:58.848 "data_size": 63488 00:09:58.848 }, 00:09:58.848 { 00:09:58.848 "name": "BaseBdev3", 00:09:58.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.848 "is_configured": false, 00:09:58.848 "data_offset": 0, 00:09:58.848 "data_size": 0 00:09:58.848 } 00:09:58.848 ] 00:09:58.848 }' 00:09:58.848 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.849 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 [2024-11-19 12:02:02.609286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.418 [2024-11-19 12:02:02.609631] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:59.418 [2024-11-19 12:02:02.609679] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:59.418 BaseBdev3 00:09:59.418 [2024-11-19 12:02:02.610034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:59.418 [2024-11-19 12:02:02.610197] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:59.418 [2024-11-19 12:02:02.610207] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:59.418 [2024-11-19 12:02:02.610354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.418 12:02:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.418 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 [ 00:09:59.418 { 00:09:59.418 "name": "BaseBdev3", 00:09:59.418 "aliases": [ 00:09:59.418 "c55c6acd-a96d-4caa-9e59-5ebcbf290ff8" 00:09:59.418 ], 00:09:59.418 "product_name": "Malloc disk", 00:09:59.419 "block_size": 512, 00:09:59.419 "num_blocks": 65536, 00:09:59.419 "uuid": "c55c6acd-a96d-4caa-9e59-5ebcbf290ff8", 00:09:59.419 "assigned_rate_limits": { 00:09:59.419 "rw_ios_per_sec": 0, 00:09:59.419 "rw_mbytes_per_sec": 0, 00:09:59.419 "r_mbytes_per_sec": 0, 00:09:59.419 "w_mbytes_per_sec": 0 00:09:59.419 }, 00:09:59.419 "claimed": true, 00:09:59.419 "claim_type": "exclusive_write", 00:09:59.419 "zoned": false, 00:09:59.419 "supported_io_types": { 00:09:59.419 "read": true, 00:09:59.419 "write": true, 00:09:59.419 "unmap": true, 00:09:59.419 "flush": true, 00:09:59.419 "reset": true, 00:09:59.419 "nvme_admin": false, 00:09:59.419 "nvme_io": false, 00:09:59.419 "nvme_io_md": false, 00:09:59.419 "write_zeroes": true, 00:09:59.419 "zcopy": true, 00:09:59.419 "get_zone_info": false, 00:09:59.419 "zone_management": false, 00:09:59.419 "zone_append": false, 00:09:59.419 "compare": false, 00:09:59.419 "compare_and_write": false, 00:09:59.419 "abort": true, 00:09:59.419 "seek_hole": false, 00:09:59.419 "seek_data": false, 00:09:59.419 "copy": true, 00:09:59.419 "nvme_iov_md": false 00:09:59.419 }, 00:09:59.419 "memory_domains": [ 00:09:59.419 { 00:09:59.419 "dma_device_id": "system", 00:09:59.419 "dma_device_type": 1 00:09:59.419 }, 00:09:59.419 { 00:09:59.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.419 "dma_device_type": 2 00:09:59.419 } 00:09:59.419 ], 00:09:59.419 "driver_specific": {} 00:09:59.419 } 00:09:59.419 ] 
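Once BaseBdev3 is claimed, the raid bdev's state flips from "configuring" to "online": every entry in `base_bdevs_list` now reports `is_configured: true`. The transition rule implied by the dumps in this log can be sketched as follows — this is an illustrative model, not raid_bdev's actual C logic:

```python
def raid_state(base_bdevs_list, num_base_bdevs_operational):
    """A raid bdev stays 'configuring' until every operational base
    bdev has been discovered and claimed, then reports 'online'."""
    discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
    return ("online" if discovered == num_base_bdevs_operational
            else "configuring")

# The three base bdevs as listed in the final get_bdevs dump above.
bases = [{"name": n, "is_configured": True}
         for n in ("BaseBdev1", "BaseBdev2", "BaseBdev3")]
state = raid_state(bases, 3)

# With only two of three claimed (the earlier dumps), still configuring.
partial = bases[:2] + [{"name": "BaseBdev3", "is_configured": False}]
earlier = raid_state(partial, 3)
```

This matches the progression visible in the log: `num_base_bdevs_discovered` climbs 0 → 1 → 2 with `"state": "configuring"`, then reaches 3 and the dump reports `"state": "online"`.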
00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.419 
12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.419 "name": "Existed_Raid", 00:09:59.419 "uuid": "82e4043e-dee9-4b59-bf73-d441e47c6bf9", 00:09:59.419 "strip_size_kb": 0, 00:09:59.419 "state": "online", 00:09:59.419 "raid_level": "raid1", 00:09:59.419 "superblock": true, 00:09:59.419 "num_base_bdevs": 3, 00:09:59.419 "num_base_bdevs_discovered": 3, 00:09:59.419 "num_base_bdevs_operational": 3, 00:09:59.419 "base_bdevs_list": [ 00:09:59.419 { 00:09:59.419 "name": "BaseBdev1", 00:09:59.419 "uuid": "c570cdbd-6835-4b42-b11a-838520948e0c", 00:09:59.419 "is_configured": true, 00:09:59.419 "data_offset": 2048, 00:09:59.419 "data_size": 63488 00:09:59.419 }, 00:09:59.419 { 00:09:59.419 "name": "BaseBdev2", 00:09:59.419 "uuid": "70c21f96-b286-4740-b467-387cc255a084", 00:09:59.419 "is_configured": true, 00:09:59.419 "data_offset": 2048, 00:09:59.419 "data_size": 63488 00:09:59.419 }, 00:09:59.419 { 00:09:59.419 "name": "BaseBdev3", 00:09:59.419 "uuid": "c55c6acd-a96d-4caa-9e59-5ebcbf290ff8", 00:09:59.419 "is_configured": true, 00:09:59.419 "data_offset": 2048, 00:09:59.419 "data_size": 63488 00:09:59.419 } 00:09:59.419 ] 00:09:59.419 }' 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.419 12:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.990 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:59.990 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:59.990 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:59.990 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.990 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.990 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.990 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:59.990 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.990 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.990 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.990 [2024-11-19 12:02:03.108851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.990 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.990 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.990 "name": "Existed_Raid", 00:09:59.990 "aliases": [ 00:09:59.990 "82e4043e-dee9-4b59-bf73-d441e47c6bf9" 00:09:59.990 ], 00:09:59.990 "product_name": "Raid Volume", 00:09:59.990 "block_size": 512, 00:09:59.990 "num_blocks": 63488, 00:09:59.990 "uuid": "82e4043e-dee9-4b59-bf73-d441e47c6bf9", 00:09:59.990 "assigned_rate_limits": { 00:09:59.990 "rw_ios_per_sec": 0, 00:09:59.990 "rw_mbytes_per_sec": 0, 00:09:59.990 "r_mbytes_per_sec": 0, 00:09:59.990 "w_mbytes_per_sec": 0 00:09:59.990 }, 00:09:59.990 "claimed": false, 00:09:59.990 "zoned": false, 00:09:59.990 "supported_io_types": { 00:09:59.990 "read": true, 00:09:59.990 "write": true, 00:09:59.990 "unmap": false, 00:09:59.990 "flush": false, 00:09:59.990 "reset": true, 00:09:59.990 "nvme_admin": false, 00:09:59.990 "nvme_io": false, 00:09:59.990 "nvme_io_md": false, 00:09:59.990 "write_zeroes": true, 
00:09:59.990 "zcopy": false, 00:09:59.990 "get_zone_info": false, 00:09:59.990 "zone_management": false, 00:09:59.990 "zone_append": false, 00:09:59.990 "compare": false, 00:09:59.990 "compare_and_write": false, 00:09:59.990 "abort": false, 00:09:59.990 "seek_hole": false, 00:09:59.990 "seek_data": false, 00:09:59.990 "copy": false, 00:09:59.990 "nvme_iov_md": false 00:09:59.990 }, 00:09:59.991 "memory_domains": [ 00:09:59.991 { 00:09:59.991 "dma_device_id": "system", 00:09:59.991 "dma_device_type": 1 00:09:59.991 }, 00:09:59.991 { 00:09:59.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.991 "dma_device_type": 2 00:09:59.991 }, 00:09:59.991 { 00:09:59.991 "dma_device_id": "system", 00:09:59.991 "dma_device_type": 1 00:09:59.991 }, 00:09:59.991 { 00:09:59.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.991 "dma_device_type": 2 00:09:59.991 }, 00:09:59.991 { 00:09:59.991 "dma_device_id": "system", 00:09:59.991 "dma_device_type": 1 00:09:59.991 }, 00:09:59.991 { 00:09:59.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.991 "dma_device_type": 2 00:09:59.991 } 00:09:59.991 ], 00:09:59.991 "driver_specific": { 00:09:59.991 "raid": { 00:09:59.991 "uuid": "82e4043e-dee9-4b59-bf73-d441e47c6bf9", 00:09:59.991 "strip_size_kb": 0, 00:09:59.991 "state": "online", 00:09:59.991 "raid_level": "raid1", 00:09:59.991 "superblock": true, 00:09:59.991 "num_base_bdevs": 3, 00:09:59.991 "num_base_bdevs_discovered": 3, 00:09:59.991 "num_base_bdevs_operational": 3, 00:09:59.991 "base_bdevs_list": [ 00:09:59.991 { 00:09:59.991 "name": "BaseBdev1", 00:09:59.991 "uuid": "c570cdbd-6835-4b42-b11a-838520948e0c", 00:09:59.991 "is_configured": true, 00:09:59.991 "data_offset": 2048, 00:09:59.991 "data_size": 63488 00:09:59.991 }, 00:09:59.991 { 00:09:59.991 "name": "BaseBdev2", 00:09:59.991 "uuid": "70c21f96-b286-4740-b467-387cc255a084", 00:09:59.991 "is_configured": true, 00:09:59.991 "data_offset": 2048, 00:09:59.991 "data_size": 63488 00:09:59.991 }, 00:09:59.991 { 
00:09:59.991 "name": "BaseBdev3", 00:09:59.991 "uuid": "c55c6acd-a96d-4caa-9e59-5ebcbf290ff8", 00:09:59.991 "is_configured": true, 00:09:59.991 "data_offset": 2048, 00:09:59.991 "data_size": 63488 00:09:59.991 } 00:09:59.991 ] 00:09:59.991 } 00:09:59.991 } 00:09:59.991 }' 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:59.991 BaseBdev2 00:09:59.991 BaseBdev3' 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.991 12:02:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.991 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.251 [2024-11-19 12:02:03.380102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.251 
12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.251 "name": "Existed_Raid", 00:10:00.251 "uuid": "82e4043e-dee9-4b59-bf73-d441e47c6bf9", 00:10:00.251 "strip_size_kb": 0, 00:10:00.251 "state": "online", 00:10:00.251 "raid_level": "raid1", 00:10:00.251 "superblock": true, 00:10:00.251 "num_base_bdevs": 3, 00:10:00.251 "num_base_bdevs_discovered": 2, 00:10:00.251 "num_base_bdevs_operational": 2, 00:10:00.251 "base_bdevs_list": [ 00:10:00.251 { 00:10:00.251 "name": null, 00:10:00.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.251 "is_configured": false, 00:10:00.251 "data_offset": 0, 00:10:00.251 "data_size": 63488 00:10:00.251 }, 00:10:00.251 { 00:10:00.251 "name": "BaseBdev2", 00:10:00.251 "uuid": "70c21f96-b286-4740-b467-387cc255a084", 00:10:00.251 "is_configured": true, 00:10:00.251 "data_offset": 2048, 00:10:00.251 "data_size": 63488 00:10:00.251 }, 00:10:00.251 { 00:10:00.251 "name": "BaseBdev3", 00:10:00.251 "uuid": "c55c6acd-a96d-4caa-9e59-5ebcbf290ff8", 00:10:00.251 "is_configured": true, 00:10:00.251 "data_offset": 2048, 00:10:00.251 "data_size": 63488 00:10:00.251 } 00:10:00.251 ] 00:10:00.251 }' 00:10:00.251 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.251 
12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.820 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:00.820 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.820 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.820 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.820 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.820 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:00.820 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.820 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.820 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.820 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:00.820 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.820 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.820 [2024-11-19 12:02:03.944167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:00.820 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.820 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.820 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.820 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:00.820 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:00.820 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.820 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.820 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.820 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.820 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.820 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:00.820 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.820 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.820 [2024-11-19 12:02:04.102033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:00.820 [2024-11-19 12:02:04.102140] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.082 [2024-11-19 12:02:04.197716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.082 [2024-11-19 12:02:04.197778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.082 [2024-11-19 12:02:04.197791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.082 BaseBdev2 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.082 [ 00:10:01.082 { 00:10:01.082 "name": "BaseBdev2", 00:10:01.082 "aliases": [ 00:10:01.082 "a3804f25-dbdf-436d-bc2d-7c0b59ea5af3" 00:10:01.082 ], 00:10:01.082 "product_name": "Malloc disk", 00:10:01.082 "block_size": 512, 00:10:01.082 "num_blocks": 65536, 00:10:01.082 "uuid": "a3804f25-dbdf-436d-bc2d-7c0b59ea5af3", 00:10:01.082 "assigned_rate_limits": { 00:10:01.082 "rw_ios_per_sec": 0, 00:10:01.082 "rw_mbytes_per_sec": 0, 00:10:01.082 "r_mbytes_per_sec": 0, 00:10:01.082 "w_mbytes_per_sec": 0 00:10:01.082 }, 00:10:01.082 "claimed": false, 00:10:01.082 "zoned": false, 00:10:01.082 "supported_io_types": { 00:10:01.082 "read": true, 00:10:01.082 "write": true, 00:10:01.082 "unmap": true, 00:10:01.082 "flush": true, 00:10:01.082 "reset": true, 00:10:01.082 "nvme_admin": false, 00:10:01.082 "nvme_io": false, 00:10:01.082 
"nvme_io_md": false, 00:10:01.082 "write_zeroes": true, 00:10:01.082 "zcopy": true, 00:10:01.082 "get_zone_info": false, 00:10:01.082 "zone_management": false, 00:10:01.082 "zone_append": false, 00:10:01.082 "compare": false, 00:10:01.082 "compare_and_write": false, 00:10:01.082 "abort": true, 00:10:01.082 "seek_hole": false, 00:10:01.082 "seek_data": false, 00:10:01.082 "copy": true, 00:10:01.082 "nvme_iov_md": false 00:10:01.082 }, 00:10:01.082 "memory_domains": [ 00:10:01.082 { 00:10:01.082 "dma_device_id": "system", 00:10:01.082 "dma_device_type": 1 00:10:01.082 }, 00:10:01.082 { 00:10:01.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.082 "dma_device_type": 2 00:10:01.082 } 00:10:01.082 ], 00:10:01.082 "driver_specific": {} 00:10:01.082 } 00:10:01.082 ] 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.082 BaseBdev3 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.082 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.083 [ 00:10:01.083 { 00:10:01.083 "name": "BaseBdev3", 00:10:01.083 "aliases": [ 00:10:01.083 "9a8e8ff1-ac9c-4e96-941c-1ed9ec146a88" 00:10:01.083 ], 00:10:01.083 "product_name": "Malloc disk", 00:10:01.083 "block_size": 512, 00:10:01.083 "num_blocks": 65536, 00:10:01.083 "uuid": "9a8e8ff1-ac9c-4e96-941c-1ed9ec146a88", 00:10:01.083 "assigned_rate_limits": { 00:10:01.083 "rw_ios_per_sec": 0, 00:10:01.083 "rw_mbytes_per_sec": 0, 00:10:01.083 "r_mbytes_per_sec": 0, 00:10:01.083 "w_mbytes_per_sec": 0 00:10:01.083 }, 00:10:01.083 "claimed": false, 00:10:01.083 "zoned": false, 00:10:01.083 "supported_io_types": { 00:10:01.083 "read": true, 00:10:01.083 "write": true, 00:10:01.083 "unmap": true, 00:10:01.083 "flush": true, 00:10:01.083 "reset": true, 00:10:01.083 "nvme_admin": false, 
00:10:01.083 "nvme_io": false, 00:10:01.083 "nvme_io_md": false, 00:10:01.083 "write_zeroes": true, 00:10:01.083 "zcopy": true, 00:10:01.083 "get_zone_info": false, 00:10:01.083 "zone_management": false, 00:10:01.083 "zone_append": false, 00:10:01.083 "compare": false, 00:10:01.083 "compare_and_write": false, 00:10:01.083 "abort": true, 00:10:01.083 "seek_hole": false, 00:10:01.083 "seek_data": false, 00:10:01.083 "copy": true, 00:10:01.083 "nvme_iov_md": false 00:10:01.083 }, 00:10:01.083 "memory_domains": [ 00:10:01.083 { 00:10:01.083 "dma_device_id": "system", 00:10:01.083 "dma_device_type": 1 00:10:01.083 }, 00:10:01.083 { 00:10:01.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.083 "dma_device_type": 2 00:10:01.083 } 00:10:01.083 ], 00:10:01.083 "driver_specific": {} 00:10:01.083 } 00:10:01.083 ] 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.083 [2024-11-19 12:02:04.413317] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.083 [2024-11-19 12:02:04.413458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.083 [2024-11-19 12:02:04.413499] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.083 [2024-11-19 12:02:04.415270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.083 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.083 
12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.400 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.400 "name": "Existed_Raid", 00:10:01.400 "uuid": "ab397209-b60f-47c2-8363-4a4b3676c49d", 00:10:01.400 "strip_size_kb": 0, 00:10:01.400 "state": "configuring", 00:10:01.400 "raid_level": "raid1", 00:10:01.400 "superblock": true, 00:10:01.400 "num_base_bdevs": 3, 00:10:01.400 "num_base_bdevs_discovered": 2, 00:10:01.400 "num_base_bdevs_operational": 3, 00:10:01.400 "base_bdevs_list": [ 00:10:01.400 { 00:10:01.400 "name": "BaseBdev1", 00:10:01.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.400 "is_configured": false, 00:10:01.400 "data_offset": 0, 00:10:01.400 "data_size": 0 00:10:01.400 }, 00:10:01.400 { 00:10:01.400 "name": "BaseBdev2", 00:10:01.400 "uuid": "a3804f25-dbdf-436d-bc2d-7c0b59ea5af3", 00:10:01.400 "is_configured": true, 00:10:01.400 "data_offset": 2048, 00:10:01.400 "data_size": 63488 00:10:01.400 }, 00:10:01.400 { 00:10:01.400 "name": "BaseBdev3", 00:10:01.400 "uuid": "9a8e8ff1-ac9c-4e96-941c-1ed9ec146a88", 00:10:01.400 "is_configured": true, 00:10:01.400 "data_offset": 2048, 00:10:01.400 "data_size": 63488 00:10:01.400 } 00:10:01.400 ] 00:10:01.400 }' 00:10:01.400 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.400 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.661 [2024-11-19 12:02:04.852590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.661 12:02:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.661 "name": 
"Existed_Raid", 00:10:01.661 "uuid": "ab397209-b60f-47c2-8363-4a4b3676c49d", 00:10:01.661 "strip_size_kb": 0, 00:10:01.661 "state": "configuring", 00:10:01.661 "raid_level": "raid1", 00:10:01.661 "superblock": true, 00:10:01.661 "num_base_bdevs": 3, 00:10:01.661 "num_base_bdevs_discovered": 1, 00:10:01.661 "num_base_bdevs_operational": 3, 00:10:01.661 "base_bdevs_list": [ 00:10:01.661 { 00:10:01.661 "name": "BaseBdev1", 00:10:01.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.661 "is_configured": false, 00:10:01.661 "data_offset": 0, 00:10:01.661 "data_size": 0 00:10:01.661 }, 00:10:01.661 { 00:10:01.661 "name": null, 00:10:01.661 "uuid": "a3804f25-dbdf-436d-bc2d-7c0b59ea5af3", 00:10:01.661 "is_configured": false, 00:10:01.661 "data_offset": 0, 00:10:01.661 "data_size": 63488 00:10:01.661 }, 00:10:01.661 { 00:10:01.661 "name": "BaseBdev3", 00:10:01.661 "uuid": "9a8e8ff1-ac9c-4e96-941c-1ed9ec146a88", 00:10:01.661 "is_configured": true, 00:10:01.661 "data_offset": 2048, 00:10:01.661 "data_size": 63488 00:10:01.661 } 00:10:01.661 ] 00:10:01.661 }' 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.661 12:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.921 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.921 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.921 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.921 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:02.180 
12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.180 [2024-11-19 12:02:05.372575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.180 BaseBdev1 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:02.180 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.180 [ 00:10:02.180 { 00:10:02.181 "name": "BaseBdev1", 00:10:02.181 "aliases": [ 00:10:02.181 "d3a2e2e2-4a80-4c6f-9ebd-a49d3441a175" 00:10:02.181 ], 00:10:02.181 "product_name": "Malloc disk", 00:10:02.181 "block_size": 512, 00:10:02.181 "num_blocks": 65536, 00:10:02.181 "uuid": "d3a2e2e2-4a80-4c6f-9ebd-a49d3441a175", 00:10:02.181 "assigned_rate_limits": { 00:10:02.181 "rw_ios_per_sec": 0, 00:10:02.181 "rw_mbytes_per_sec": 0, 00:10:02.181 "r_mbytes_per_sec": 0, 00:10:02.181 "w_mbytes_per_sec": 0 00:10:02.181 }, 00:10:02.181 "claimed": true, 00:10:02.181 "claim_type": "exclusive_write", 00:10:02.181 "zoned": false, 00:10:02.181 "supported_io_types": { 00:10:02.181 "read": true, 00:10:02.181 "write": true, 00:10:02.181 "unmap": true, 00:10:02.181 "flush": true, 00:10:02.181 "reset": true, 00:10:02.181 "nvme_admin": false, 00:10:02.181 "nvme_io": false, 00:10:02.181 "nvme_io_md": false, 00:10:02.181 "write_zeroes": true, 00:10:02.181 "zcopy": true, 00:10:02.181 "get_zone_info": false, 00:10:02.181 "zone_management": false, 00:10:02.181 "zone_append": false, 00:10:02.181 "compare": false, 00:10:02.181 "compare_and_write": false, 00:10:02.181 "abort": true, 00:10:02.181 "seek_hole": false, 00:10:02.181 "seek_data": false, 00:10:02.181 "copy": true, 00:10:02.181 "nvme_iov_md": false 00:10:02.181 }, 00:10:02.181 "memory_domains": [ 00:10:02.181 { 00:10:02.181 "dma_device_id": "system", 00:10:02.181 "dma_device_type": 1 00:10:02.181 }, 00:10:02.181 { 00:10:02.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.181 "dma_device_type": 2 00:10:02.181 } 00:10:02.181 ], 00:10:02.181 "driver_specific": {} 00:10:02.181 } 00:10:02.181 ] 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:02.181 
12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.181 "name": "Existed_Raid", 00:10:02.181 "uuid": "ab397209-b60f-47c2-8363-4a4b3676c49d", 00:10:02.181 "strip_size_kb": 0, 
00:10:02.181 "state": "configuring", 00:10:02.181 "raid_level": "raid1", 00:10:02.181 "superblock": true, 00:10:02.181 "num_base_bdevs": 3, 00:10:02.181 "num_base_bdevs_discovered": 2, 00:10:02.181 "num_base_bdevs_operational": 3, 00:10:02.181 "base_bdevs_list": [ 00:10:02.181 { 00:10:02.181 "name": "BaseBdev1", 00:10:02.181 "uuid": "d3a2e2e2-4a80-4c6f-9ebd-a49d3441a175", 00:10:02.181 "is_configured": true, 00:10:02.181 "data_offset": 2048, 00:10:02.181 "data_size": 63488 00:10:02.181 }, 00:10:02.181 { 00:10:02.181 "name": null, 00:10:02.181 "uuid": "a3804f25-dbdf-436d-bc2d-7c0b59ea5af3", 00:10:02.181 "is_configured": false, 00:10:02.181 "data_offset": 0, 00:10:02.181 "data_size": 63488 00:10:02.181 }, 00:10:02.181 { 00:10:02.181 "name": "BaseBdev3", 00:10:02.181 "uuid": "9a8e8ff1-ac9c-4e96-941c-1ed9ec146a88", 00:10:02.181 "is_configured": true, 00:10:02.181 "data_offset": 2048, 00:10:02.181 "data_size": 63488 00:10:02.181 } 00:10:02.181 ] 00:10:02.181 }' 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.181 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.751 [2024-11-19 12:02:05.895725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.751 "name": "Existed_Raid", 00:10:02.751 "uuid": "ab397209-b60f-47c2-8363-4a4b3676c49d", 00:10:02.751 "strip_size_kb": 0, 00:10:02.751 "state": "configuring", 00:10:02.751 "raid_level": "raid1", 00:10:02.751 "superblock": true, 00:10:02.751 "num_base_bdevs": 3, 00:10:02.751 "num_base_bdevs_discovered": 1, 00:10:02.751 "num_base_bdevs_operational": 3, 00:10:02.751 "base_bdevs_list": [ 00:10:02.751 { 00:10:02.751 "name": "BaseBdev1", 00:10:02.751 "uuid": "d3a2e2e2-4a80-4c6f-9ebd-a49d3441a175", 00:10:02.751 "is_configured": true, 00:10:02.751 "data_offset": 2048, 00:10:02.751 "data_size": 63488 00:10:02.751 }, 00:10:02.751 { 00:10:02.751 "name": null, 00:10:02.751 "uuid": "a3804f25-dbdf-436d-bc2d-7c0b59ea5af3", 00:10:02.751 "is_configured": false, 00:10:02.751 "data_offset": 0, 00:10:02.751 "data_size": 63488 00:10:02.751 }, 00:10:02.751 { 00:10:02.751 "name": null, 00:10:02.751 "uuid": "9a8e8ff1-ac9c-4e96-941c-1ed9ec146a88", 00:10:02.751 "is_configured": false, 00:10:02.751 "data_offset": 0, 00:10:02.751 "data_size": 63488 00:10:02.751 } 00:10:02.751 ] 00:10:02.751 }' 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.751 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.011 [2024-11-19 12:02:06.371014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.011 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.012 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.012 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.271 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.271 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.271 "name": "Existed_Raid", 00:10:03.272 "uuid": "ab397209-b60f-47c2-8363-4a4b3676c49d", 00:10:03.272 "strip_size_kb": 0, 00:10:03.272 "state": "configuring", 00:10:03.272 "raid_level": "raid1", 00:10:03.272 "superblock": true, 00:10:03.272 "num_base_bdevs": 3, 00:10:03.272 "num_base_bdevs_discovered": 2, 00:10:03.272 "num_base_bdevs_operational": 3, 00:10:03.272 "base_bdevs_list": [ 00:10:03.272 { 00:10:03.272 "name": "BaseBdev1", 00:10:03.272 "uuid": "d3a2e2e2-4a80-4c6f-9ebd-a49d3441a175", 00:10:03.272 "is_configured": true, 00:10:03.272 "data_offset": 2048, 00:10:03.272 "data_size": 63488 00:10:03.272 }, 00:10:03.272 { 00:10:03.272 "name": null, 00:10:03.272 "uuid": "a3804f25-dbdf-436d-bc2d-7c0b59ea5af3", 00:10:03.272 "is_configured": false, 00:10:03.272 "data_offset": 0, 00:10:03.272 "data_size": 63488 00:10:03.272 }, 00:10:03.272 { 00:10:03.272 "name": "BaseBdev3", 00:10:03.272 "uuid": "9a8e8ff1-ac9c-4e96-941c-1ed9ec146a88", 00:10:03.272 "is_configured": true, 00:10:03.272 "data_offset": 2048, 00:10:03.272 "data_size": 63488 00:10:03.272 } 00:10:03.272 ] 00:10:03.272 }' 00:10:03.272 12:02:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.272 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.531 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:03.531 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.531 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.531 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.531 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.531 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:03.531 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:03.531 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.531 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.531 [2024-11-19 12:02:06.846156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.789 "name": "Existed_Raid", 00:10:03.789 "uuid": "ab397209-b60f-47c2-8363-4a4b3676c49d", 00:10:03.789 "strip_size_kb": 0, 00:10:03.789 "state": "configuring", 00:10:03.789 "raid_level": "raid1", 00:10:03.789 "superblock": true, 00:10:03.789 "num_base_bdevs": 3, 00:10:03.789 "num_base_bdevs_discovered": 1, 00:10:03.789 "num_base_bdevs_operational": 3, 00:10:03.789 "base_bdevs_list": [ 00:10:03.789 { 00:10:03.789 "name": null, 00:10:03.789 "uuid": "d3a2e2e2-4a80-4c6f-9ebd-a49d3441a175", 00:10:03.789 "is_configured": false, 00:10:03.789 "data_offset": 0, 00:10:03.789 "data_size": 63488 00:10:03.789 }, 00:10:03.789 { 00:10:03.789 "name": null, 00:10:03.789 "uuid": 
"a3804f25-dbdf-436d-bc2d-7c0b59ea5af3", 00:10:03.789 "is_configured": false, 00:10:03.789 "data_offset": 0, 00:10:03.789 "data_size": 63488 00:10:03.789 }, 00:10:03.789 { 00:10:03.789 "name": "BaseBdev3", 00:10:03.789 "uuid": "9a8e8ff1-ac9c-4e96-941c-1ed9ec146a88", 00:10:03.789 "is_configured": true, 00:10:03.789 "data_offset": 2048, 00:10:03.789 "data_size": 63488 00:10:03.789 } 00:10:03.789 ] 00:10:03.789 }' 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.789 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.048 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.048 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.048 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.048 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:04.048 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.048 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:04.048 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:04.048 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.048 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.307 [2024-11-19 12:02:07.423471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.307 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.307 "name": "Existed_Raid", 00:10:04.307 "uuid": "ab397209-b60f-47c2-8363-4a4b3676c49d", 00:10:04.307 "strip_size_kb": 0, 00:10:04.307 "state": "configuring", 00:10:04.307 
"raid_level": "raid1", 00:10:04.307 "superblock": true, 00:10:04.307 "num_base_bdevs": 3, 00:10:04.307 "num_base_bdevs_discovered": 2, 00:10:04.307 "num_base_bdevs_operational": 3, 00:10:04.307 "base_bdevs_list": [ 00:10:04.307 { 00:10:04.308 "name": null, 00:10:04.308 "uuid": "d3a2e2e2-4a80-4c6f-9ebd-a49d3441a175", 00:10:04.308 "is_configured": false, 00:10:04.308 "data_offset": 0, 00:10:04.308 "data_size": 63488 00:10:04.308 }, 00:10:04.308 { 00:10:04.308 "name": "BaseBdev2", 00:10:04.308 "uuid": "a3804f25-dbdf-436d-bc2d-7c0b59ea5af3", 00:10:04.308 "is_configured": true, 00:10:04.308 "data_offset": 2048, 00:10:04.308 "data_size": 63488 00:10:04.308 }, 00:10:04.308 { 00:10:04.308 "name": "BaseBdev3", 00:10:04.308 "uuid": "9a8e8ff1-ac9c-4e96-941c-1ed9ec146a88", 00:10:04.308 "is_configured": true, 00:10:04.308 "data_offset": 2048, 00:10:04.308 "data_size": 63488 00:10:04.308 } 00:10:04.308 ] 00:10:04.308 }' 00:10:04.308 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.308 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:04.567 12:02:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d3a2e2e2-4a80-4c6f-9ebd-a49d3441a175 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.567 [2024-11-19 12:02:07.931144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:04.567 [2024-11-19 12:02:07.931459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:04.567 [2024-11-19 12:02:07.931507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:04.567 [2024-11-19 12:02:07.931760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:04.567 [2024-11-19 12:02:07.931948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:04.567 [2024-11-19 12:02:07.932005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:04.567 NewBaseBdev 00:10:04.567 [2024-11-19 12:02:07.932168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:04.567 
12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.567 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.826 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.826 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:04.826 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.826 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.826 [ 00:10:04.827 { 00:10:04.827 "name": "NewBaseBdev", 00:10:04.827 "aliases": [ 00:10:04.827 "d3a2e2e2-4a80-4c6f-9ebd-a49d3441a175" 00:10:04.827 ], 00:10:04.827 "product_name": "Malloc disk", 00:10:04.827 "block_size": 512, 00:10:04.827 "num_blocks": 65536, 00:10:04.827 "uuid": "d3a2e2e2-4a80-4c6f-9ebd-a49d3441a175", 00:10:04.827 "assigned_rate_limits": { 00:10:04.827 "rw_ios_per_sec": 0, 00:10:04.827 "rw_mbytes_per_sec": 0, 00:10:04.827 "r_mbytes_per_sec": 0, 00:10:04.827 "w_mbytes_per_sec": 0 00:10:04.827 }, 00:10:04.827 "claimed": true, 00:10:04.827 "claim_type": "exclusive_write", 00:10:04.827 
"zoned": false, 00:10:04.827 "supported_io_types": { 00:10:04.827 "read": true, 00:10:04.827 "write": true, 00:10:04.827 "unmap": true, 00:10:04.827 "flush": true, 00:10:04.827 "reset": true, 00:10:04.827 "nvme_admin": false, 00:10:04.827 "nvme_io": false, 00:10:04.827 "nvme_io_md": false, 00:10:04.827 "write_zeroes": true, 00:10:04.827 "zcopy": true, 00:10:04.827 "get_zone_info": false, 00:10:04.827 "zone_management": false, 00:10:04.827 "zone_append": false, 00:10:04.827 "compare": false, 00:10:04.827 "compare_and_write": false, 00:10:04.827 "abort": true, 00:10:04.827 "seek_hole": false, 00:10:04.827 "seek_data": false, 00:10:04.827 "copy": true, 00:10:04.827 "nvme_iov_md": false 00:10:04.827 }, 00:10:04.827 "memory_domains": [ 00:10:04.827 { 00:10:04.827 "dma_device_id": "system", 00:10:04.827 "dma_device_type": 1 00:10:04.827 }, 00:10:04.827 { 00:10:04.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.827 "dma_device_type": 2 00:10:04.827 } 00:10:04.827 ], 00:10:04.827 "driver_specific": {} 00:10:04.827 } 00:10:04.827 ] 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.827 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.827 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.827 "name": "Existed_Raid", 00:10:04.827 "uuid": "ab397209-b60f-47c2-8363-4a4b3676c49d", 00:10:04.827 "strip_size_kb": 0, 00:10:04.827 "state": "online", 00:10:04.827 "raid_level": "raid1", 00:10:04.827 "superblock": true, 00:10:04.827 "num_base_bdevs": 3, 00:10:04.827 "num_base_bdevs_discovered": 3, 00:10:04.827 "num_base_bdevs_operational": 3, 00:10:04.827 "base_bdevs_list": [ 00:10:04.827 { 00:10:04.827 "name": "NewBaseBdev", 00:10:04.827 "uuid": "d3a2e2e2-4a80-4c6f-9ebd-a49d3441a175", 00:10:04.827 "is_configured": true, 00:10:04.827 "data_offset": 2048, 00:10:04.827 "data_size": 63488 00:10:04.827 }, 00:10:04.827 { 00:10:04.827 "name": "BaseBdev2", 00:10:04.827 "uuid": "a3804f25-dbdf-436d-bc2d-7c0b59ea5af3", 00:10:04.827 "is_configured": true, 00:10:04.827 "data_offset": 2048, 00:10:04.827 "data_size": 63488 00:10:04.827 }, 00:10:04.827 
{ 00:10:04.827 "name": "BaseBdev3", 00:10:04.827 "uuid": "9a8e8ff1-ac9c-4e96-941c-1ed9ec146a88", 00:10:04.827 "is_configured": true, 00:10:04.827 "data_offset": 2048, 00:10:04.827 "data_size": 63488 00:10:04.827 } 00:10:04.827 ] 00:10:04.827 }' 00:10:04.827 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.827 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.087 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:05.087 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:05.087 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.087 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.087 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.087 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.087 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:05.087 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.087 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.087 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.087 [2024-11-19 12:02:08.458606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.346 "name": "Existed_Raid", 00:10:05.346 
"aliases": [ 00:10:05.346 "ab397209-b60f-47c2-8363-4a4b3676c49d" 00:10:05.346 ], 00:10:05.346 "product_name": "Raid Volume", 00:10:05.346 "block_size": 512, 00:10:05.346 "num_blocks": 63488, 00:10:05.346 "uuid": "ab397209-b60f-47c2-8363-4a4b3676c49d", 00:10:05.346 "assigned_rate_limits": { 00:10:05.346 "rw_ios_per_sec": 0, 00:10:05.346 "rw_mbytes_per_sec": 0, 00:10:05.346 "r_mbytes_per_sec": 0, 00:10:05.346 "w_mbytes_per_sec": 0 00:10:05.346 }, 00:10:05.346 "claimed": false, 00:10:05.346 "zoned": false, 00:10:05.346 "supported_io_types": { 00:10:05.346 "read": true, 00:10:05.346 "write": true, 00:10:05.346 "unmap": false, 00:10:05.346 "flush": false, 00:10:05.346 "reset": true, 00:10:05.346 "nvme_admin": false, 00:10:05.346 "nvme_io": false, 00:10:05.346 "nvme_io_md": false, 00:10:05.346 "write_zeroes": true, 00:10:05.346 "zcopy": false, 00:10:05.346 "get_zone_info": false, 00:10:05.346 "zone_management": false, 00:10:05.346 "zone_append": false, 00:10:05.346 "compare": false, 00:10:05.346 "compare_and_write": false, 00:10:05.346 "abort": false, 00:10:05.346 "seek_hole": false, 00:10:05.346 "seek_data": false, 00:10:05.346 "copy": false, 00:10:05.346 "nvme_iov_md": false 00:10:05.346 }, 00:10:05.346 "memory_domains": [ 00:10:05.346 { 00:10:05.346 "dma_device_id": "system", 00:10:05.346 "dma_device_type": 1 00:10:05.346 }, 00:10:05.346 { 00:10:05.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.346 "dma_device_type": 2 00:10:05.346 }, 00:10:05.346 { 00:10:05.346 "dma_device_id": "system", 00:10:05.346 "dma_device_type": 1 00:10:05.346 }, 00:10:05.346 { 00:10:05.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.346 "dma_device_type": 2 00:10:05.346 }, 00:10:05.346 { 00:10:05.346 "dma_device_id": "system", 00:10:05.346 "dma_device_type": 1 00:10:05.346 }, 00:10:05.346 { 00:10:05.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.346 "dma_device_type": 2 00:10:05.346 } 00:10:05.346 ], 00:10:05.346 "driver_specific": { 00:10:05.346 "raid": { 00:10:05.346 
"uuid": "ab397209-b60f-47c2-8363-4a4b3676c49d", 00:10:05.346 "strip_size_kb": 0, 00:10:05.346 "state": "online", 00:10:05.346 "raid_level": "raid1", 00:10:05.346 "superblock": true, 00:10:05.346 "num_base_bdevs": 3, 00:10:05.346 "num_base_bdevs_discovered": 3, 00:10:05.346 "num_base_bdevs_operational": 3, 00:10:05.346 "base_bdevs_list": [ 00:10:05.346 { 00:10:05.346 "name": "NewBaseBdev", 00:10:05.346 "uuid": "d3a2e2e2-4a80-4c6f-9ebd-a49d3441a175", 00:10:05.346 "is_configured": true, 00:10:05.346 "data_offset": 2048, 00:10:05.346 "data_size": 63488 00:10:05.346 }, 00:10:05.346 { 00:10:05.346 "name": "BaseBdev2", 00:10:05.346 "uuid": "a3804f25-dbdf-436d-bc2d-7c0b59ea5af3", 00:10:05.346 "is_configured": true, 00:10:05.346 "data_offset": 2048, 00:10:05.346 "data_size": 63488 00:10:05.346 }, 00:10:05.346 { 00:10:05.346 "name": "BaseBdev3", 00:10:05.346 "uuid": "9a8e8ff1-ac9c-4e96-941c-1ed9ec146a88", 00:10:05.346 "is_configured": true, 00:10:05.346 "data_offset": 2048, 00:10:05.346 "data_size": 63488 00:10:05.346 } 00:10:05.346 ] 00:10:05.346 } 00:10:05.346 } 00:10:05.346 }' 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:05.346 BaseBdev2 00:10:05.346 BaseBdev3' 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.346 
12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.346 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.606 [2024-11-19 12:02:08.733822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.606 [2024-11-19 12:02:08.733941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.606 [2024-11-19 12:02:08.734036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.606 [2024-11-19 12:02:08.734333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.606 [2024-11-19 12:02:08.734388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68056 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68056 ']' 00:10:05.606 12:02:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68056 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68056 00:10:05.606 killing process with pid 68056 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68056' 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68056 00:10:05.606 [2024-11-19 12:02:08.782078] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:05.606 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68056 00:10:05.866 [2024-11-19 12:02:09.075466] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:06.801 12:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:06.801 00:10:06.801 real 0m10.380s 00:10:06.801 user 0m16.497s 00:10:06.801 sys 0m1.796s 00:10:06.801 12:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.801 12:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.801 ************************************ 00:10:06.801 END TEST raid_state_function_test_sb 00:10:06.801 ************************************ 00:10:07.061 12:02:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:07.061 12:02:10 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:07.061 12:02:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.061 12:02:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.061 ************************************ 00:10:07.061 START TEST raid_superblock_test 00:10:07.061 ************************************ 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:07.061 12:02:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68671 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68671 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68671 ']' 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.061 12:02:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.061 [2024-11-19 12:02:10.330310] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:07.061 [2024-11-19 12:02:10.330470] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68671 ] 00:10:07.320 [2024-11-19 12:02:10.510911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.320 [2024-11-19 12:02:10.629842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.579 [2024-11-19 12:02:10.829376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.579 [2024-11-19 12:02:10.829441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:07.839 
12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.839 malloc1 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.839 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.099 [2024-11-19 12:02:11.216435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:08.099 [2024-11-19 12:02:11.216589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.099 [2024-11-19 12:02:11.216632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:08.099 [2024-11-19 12:02:11.216661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.099 [2024-11-19 12:02:11.218708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.099 [2024-11-19 12:02:11.218778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:08.099 pt1 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.099 malloc2 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.099 [2024-11-19 12:02:11.273818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:08.099 [2024-11-19 12:02:11.273941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.099 [2024-11-19 12:02:11.273980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:08.099 [2024-11-19 12:02:11.274025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.099 [2024-11-19 12:02:11.276037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.099 [2024-11-19 12:02:11.276071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:08.099 
pt2 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.099 malloc3 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.099 [2024-11-19 12:02:11.339205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:08.099 [2024-11-19 12:02:11.339331] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.099 [2024-11-19 12:02:11.339358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:08.099 [2024-11-19 12:02:11.339368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.099 [2024-11-19 12:02:11.341449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.099 [2024-11-19 12:02:11.341487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:08.099 pt3 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.099 [2024-11-19 12:02:11.351229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:08.099 [2024-11-19 12:02:11.352932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:08.099 [2024-11-19 12:02:11.353005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:08.099 [2024-11-19 12:02:11.353162] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:08.099 [2024-11-19 12:02:11.353187] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:08.099 [2024-11-19 12:02:11.353414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:08.099 
[2024-11-19 12:02:11.353574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:08.099 [2024-11-19 12:02:11.353586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:08.099 [2024-11-19 12:02:11.353716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.099 "name": "raid_bdev1", 00:10:08.099 "uuid": "35c32ac5-8353-4ad1-beba-3cd0bf1cca27", 00:10:08.099 "strip_size_kb": 0, 00:10:08.099 "state": "online", 00:10:08.099 "raid_level": "raid1", 00:10:08.099 "superblock": true, 00:10:08.099 "num_base_bdevs": 3, 00:10:08.099 "num_base_bdevs_discovered": 3, 00:10:08.099 "num_base_bdevs_operational": 3, 00:10:08.099 "base_bdevs_list": [ 00:10:08.099 { 00:10:08.099 "name": "pt1", 00:10:08.099 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.099 "is_configured": true, 00:10:08.099 "data_offset": 2048, 00:10:08.099 "data_size": 63488 00:10:08.099 }, 00:10:08.099 { 00:10:08.099 "name": "pt2", 00:10:08.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.099 "is_configured": true, 00:10:08.099 "data_offset": 2048, 00:10:08.099 "data_size": 63488 00:10:08.099 }, 00:10:08.099 { 00:10:08.099 "name": "pt3", 00:10:08.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:08.099 "is_configured": true, 00:10:08.099 "data_offset": 2048, 00:10:08.099 "data_size": 63488 00:10:08.099 } 00:10:08.099 ] 00:10:08.099 }' 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.099 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.358 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:08.358 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:08.358 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.358 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.359 12:02:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.359 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.359 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.359 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:08.359 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.359 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.644 [2024-11-19 12:02:11.734932] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.644 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.644 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.644 "name": "raid_bdev1", 00:10:08.644 "aliases": [ 00:10:08.644 "35c32ac5-8353-4ad1-beba-3cd0bf1cca27" 00:10:08.644 ], 00:10:08.644 "product_name": "Raid Volume", 00:10:08.644 "block_size": 512, 00:10:08.644 "num_blocks": 63488, 00:10:08.644 "uuid": "35c32ac5-8353-4ad1-beba-3cd0bf1cca27", 00:10:08.644 "assigned_rate_limits": { 00:10:08.644 "rw_ios_per_sec": 0, 00:10:08.644 "rw_mbytes_per_sec": 0, 00:10:08.644 "r_mbytes_per_sec": 0, 00:10:08.644 "w_mbytes_per_sec": 0 00:10:08.644 }, 00:10:08.644 "claimed": false, 00:10:08.644 "zoned": false, 00:10:08.644 "supported_io_types": { 00:10:08.644 "read": true, 00:10:08.644 "write": true, 00:10:08.644 "unmap": false, 00:10:08.644 "flush": false, 00:10:08.644 "reset": true, 00:10:08.644 "nvme_admin": false, 00:10:08.644 "nvme_io": false, 00:10:08.644 "nvme_io_md": false, 00:10:08.644 "write_zeroes": true, 00:10:08.644 "zcopy": false, 00:10:08.644 "get_zone_info": false, 00:10:08.644 "zone_management": false, 00:10:08.644 "zone_append": false, 00:10:08.644 "compare": false, 00:10:08.644 
"compare_and_write": false, 00:10:08.644 "abort": false, 00:10:08.644 "seek_hole": false, 00:10:08.644 "seek_data": false, 00:10:08.644 "copy": false, 00:10:08.644 "nvme_iov_md": false 00:10:08.644 }, 00:10:08.644 "memory_domains": [ 00:10:08.644 { 00:10:08.644 "dma_device_id": "system", 00:10:08.644 "dma_device_type": 1 00:10:08.644 }, 00:10:08.644 { 00:10:08.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.644 "dma_device_type": 2 00:10:08.644 }, 00:10:08.644 { 00:10:08.644 "dma_device_id": "system", 00:10:08.644 "dma_device_type": 1 00:10:08.644 }, 00:10:08.644 { 00:10:08.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.644 "dma_device_type": 2 00:10:08.644 }, 00:10:08.644 { 00:10:08.644 "dma_device_id": "system", 00:10:08.644 "dma_device_type": 1 00:10:08.644 }, 00:10:08.644 { 00:10:08.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.644 "dma_device_type": 2 00:10:08.644 } 00:10:08.644 ], 00:10:08.644 "driver_specific": { 00:10:08.644 "raid": { 00:10:08.644 "uuid": "35c32ac5-8353-4ad1-beba-3cd0bf1cca27", 00:10:08.644 "strip_size_kb": 0, 00:10:08.644 "state": "online", 00:10:08.644 "raid_level": "raid1", 00:10:08.644 "superblock": true, 00:10:08.644 "num_base_bdevs": 3, 00:10:08.644 "num_base_bdevs_discovered": 3, 00:10:08.644 "num_base_bdevs_operational": 3, 00:10:08.644 "base_bdevs_list": [ 00:10:08.644 { 00:10:08.644 "name": "pt1", 00:10:08.644 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.644 "is_configured": true, 00:10:08.644 "data_offset": 2048, 00:10:08.644 "data_size": 63488 00:10:08.644 }, 00:10:08.644 { 00:10:08.644 "name": "pt2", 00:10:08.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.644 "is_configured": true, 00:10:08.644 "data_offset": 2048, 00:10:08.644 "data_size": 63488 00:10:08.644 }, 00:10:08.644 { 00:10:08.644 "name": "pt3", 00:10:08.644 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:08.644 "is_configured": true, 00:10:08.644 "data_offset": 2048, 00:10:08.644 "data_size": 63488 00:10:08.644 } 
00:10:08.644 ] 00:10:08.644 } 00:10:08.645 } 00:10:08.645 }' 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:08.645 pt2 00:10:08.645 pt3' 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.645 12:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:08.645 [2024-11-19 12:02:12.002393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=35c32ac5-8353-4ad1-beba-3cd0bf1cca27 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 35c32ac5-8353-4ad1-beba-3cd0bf1cca27 ']' 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.905 [2024-11-19 12:02:12.054061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:08.905 [2024-11-19 12:02:12.054100] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.905 [2024-11-19 12:02:12.054175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.905 [2024-11-19 12:02:12.054248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.905 [2024-11-19 12:02:12.054259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:08.905 12:02:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.905 [2024-11-19 12:02:12.205869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:08.905 [2024-11-19 12:02:12.207884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:08.905 [2024-11-19 12:02:12.208042] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:08.905 [2024-11-19 12:02:12.208098] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:08.905 [2024-11-19 12:02:12.208152] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:08.905 [2024-11-19 12:02:12.208173] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:08.905 [2024-11-19 12:02:12.208189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:08.905 [2024-11-19 12:02:12.208199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:08.905 request: 00:10:08.905 { 00:10:08.905 "name": "raid_bdev1", 00:10:08.905 "raid_level": "raid1", 00:10:08.905 "base_bdevs": [ 00:10:08.905 "malloc1", 00:10:08.905 "malloc2", 00:10:08.905 "malloc3" 00:10:08.905 ], 00:10:08.905 "superblock": false, 00:10:08.905 "method": "bdev_raid_create", 00:10:08.905 "req_id": 1 00:10:08.905 } 00:10:08.905 Got JSON-RPC error response 00:10:08.905 response: 00:10:08.905 { 00:10:08.905 "code": -17, 00:10:08.905 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:08.905 } 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.905 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.905 [2024-11-19 12:02:12.273698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:08.905 [2024-11-19 12:02:12.273847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.905 [2024-11-19 12:02:12.273889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:08.905 [2024-11-19 12:02:12.273917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.905 [2024-11-19 12:02:12.276135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.905 [2024-11-19 12:02:12.276214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:08.905 [2024-11-19 12:02:12.276321] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:08.905 [2024-11-19 12:02:12.276396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:08.905 pt1 00:10:09.164 
12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.164 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.164 "name": "raid_bdev1", 00:10:09.164 "uuid": "35c32ac5-8353-4ad1-beba-3cd0bf1cca27", 00:10:09.164 "strip_size_kb": 0, 00:10:09.164 
"state": "configuring", 00:10:09.164 "raid_level": "raid1", 00:10:09.164 "superblock": true, 00:10:09.164 "num_base_bdevs": 3, 00:10:09.164 "num_base_bdevs_discovered": 1, 00:10:09.164 "num_base_bdevs_operational": 3, 00:10:09.164 "base_bdevs_list": [ 00:10:09.164 { 00:10:09.164 "name": "pt1", 00:10:09.164 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.164 "is_configured": true, 00:10:09.164 "data_offset": 2048, 00:10:09.164 "data_size": 63488 00:10:09.164 }, 00:10:09.164 { 00:10:09.164 "name": null, 00:10:09.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.164 "is_configured": false, 00:10:09.164 "data_offset": 2048, 00:10:09.164 "data_size": 63488 00:10:09.164 }, 00:10:09.164 { 00:10:09.164 "name": null, 00:10:09.164 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:09.164 "is_configured": false, 00:10:09.164 "data_offset": 2048, 00:10:09.164 "data_size": 63488 00:10:09.164 } 00:10:09.164 ] 00:10:09.164 }' 00:10:09.165 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.165 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.424 [2024-11-19 12:02:12.720979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:09.424 [2024-11-19 12:02:12.721129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.424 [2024-11-19 12:02:12.721164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:09.424 
[2024-11-19 12:02:12.721174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.424 [2024-11-19 12:02:12.721627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.424 [2024-11-19 12:02:12.721645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:09.424 [2024-11-19 12:02:12.721732] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:09.424 [2024-11-19 12:02:12.721756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:09.424 pt2 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.424 [2024-11-19 12:02:12.732929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.424 "name": "raid_bdev1", 00:10:09.424 "uuid": "35c32ac5-8353-4ad1-beba-3cd0bf1cca27", 00:10:09.424 "strip_size_kb": 0, 00:10:09.424 "state": "configuring", 00:10:09.424 "raid_level": "raid1", 00:10:09.424 "superblock": true, 00:10:09.424 "num_base_bdevs": 3, 00:10:09.424 "num_base_bdevs_discovered": 1, 00:10:09.424 "num_base_bdevs_operational": 3, 00:10:09.424 "base_bdevs_list": [ 00:10:09.424 { 00:10:09.424 "name": "pt1", 00:10:09.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.424 "is_configured": true, 00:10:09.424 "data_offset": 2048, 00:10:09.424 "data_size": 63488 00:10:09.424 }, 00:10:09.424 { 00:10:09.424 "name": null, 00:10:09.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.424 "is_configured": false, 00:10:09.424 "data_offset": 0, 00:10:09.424 "data_size": 63488 00:10:09.424 }, 00:10:09.424 { 00:10:09.424 "name": null, 00:10:09.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:09.424 "is_configured": false, 00:10:09.424 
"data_offset": 2048, 00:10:09.424 "data_size": 63488 00:10:09.424 } 00:10:09.424 ] 00:10:09.424 }' 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.424 12:02:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.994 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:09.994 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:09.994 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:09.994 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.994 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.994 [2024-11-19 12:02:13.156189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:09.994 [2024-11-19 12:02:13.156346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.994 [2024-11-19 12:02:13.156380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:09.994 [2024-11-19 12:02:13.156410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.994 [2024-11-19 12:02:13.156871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.994 [2024-11-19 12:02:13.156933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:09.994 [2024-11-19 12:02:13.157053] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:09.994 [2024-11-19 12:02:13.157126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:09.994 pt2 00:10:09.994 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.994 12:02:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:09.994 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:09.994 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:09.994 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.994 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.994 [2024-11-19 12:02:13.168137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:09.994 [2024-11-19 12:02:13.168222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.995 [2024-11-19 12:02:13.168257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:09.995 [2024-11-19 12:02:13.168290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.995 [2024-11-19 12:02:13.168648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.995 [2024-11-19 12:02:13.168705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:09.995 [2024-11-19 12:02:13.168785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:09.995 [2024-11-19 12:02:13.168831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:09.995 [2024-11-19 12:02:13.168975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:09.995 [2024-11-19 12:02:13.169035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:09.995 [2024-11-19 12:02:13.169297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:09.995 [2024-11-19 12:02:13.169479] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:09.995 [2024-11-19 12:02:13.169519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:09.995 [2024-11-19 12:02:13.169682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.995 pt3 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.995 12:02:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.995 "name": "raid_bdev1", 00:10:09.995 "uuid": "35c32ac5-8353-4ad1-beba-3cd0bf1cca27", 00:10:09.995 "strip_size_kb": 0, 00:10:09.995 "state": "online", 00:10:09.995 "raid_level": "raid1", 00:10:09.995 "superblock": true, 00:10:09.995 "num_base_bdevs": 3, 00:10:09.995 "num_base_bdevs_discovered": 3, 00:10:09.995 "num_base_bdevs_operational": 3, 00:10:09.995 "base_bdevs_list": [ 00:10:09.995 { 00:10:09.995 "name": "pt1", 00:10:09.995 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.995 "is_configured": true, 00:10:09.995 "data_offset": 2048, 00:10:09.995 "data_size": 63488 00:10:09.995 }, 00:10:09.995 { 00:10:09.995 "name": "pt2", 00:10:09.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.995 "is_configured": true, 00:10:09.995 "data_offset": 2048, 00:10:09.995 "data_size": 63488 00:10:09.995 }, 00:10:09.995 { 00:10:09.995 "name": "pt3", 00:10:09.995 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:09.995 "is_configured": true, 00:10:09.995 "data_offset": 2048, 00:10:09.995 "data_size": 63488 00:10:09.995 } 00:10:09.995 ] 00:10:09.995 }' 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.995 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.254 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:10.254 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:10.254 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:10.254 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.254 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.254 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.254 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:10.254 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.254 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.254 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.254 [2024-11-19 12:02:13.619672] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.515 "name": "raid_bdev1", 00:10:10.515 "aliases": [ 00:10:10.515 "35c32ac5-8353-4ad1-beba-3cd0bf1cca27" 00:10:10.515 ], 00:10:10.515 "product_name": "Raid Volume", 00:10:10.515 "block_size": 512, 00:10:10.515 "num_blocks": 63488, 00:10:10.515 "uuid": "35c32ac5-8353-4ad1-beba-3cd0bf1cca27", 00:10:10.515 "assigned_rate_limits": { 00:10:10.515 "rw_ios_per_sec": 0, 00:10:10.515 "rw_mbytes_per_sec": 0, 00:10:10.515 "r_mbytes_per_sec": 0, 00:10:10.515 "w_mbytes_per_sec": 0 00:10:10.515 }, 00:10:10.515 "claimed": false, 00:10:10.515 "zoned": false, 00:10:10.515 "supported_io_types": { 00:10:10.515 "read": true, 00:10:10.515 "write": true, 00:10:10.515 "unmap": false, 00:10:10.515 "flush": false, 00:10:10.515 "reset": true, 00:10:10.515 "nvme_admin": false, 00:10:10.515 "nvme_io": false, 00:10:10.515 "nvme_io_md": false, 00:10:10.515 "write_zeroes": true, 00:10:10.515 "zcopy": false, 00:10:10.515 "get_zone_info": 
false, 00:10:10.515 "zone_management": false, 00:10:10.515 "zone_append": false, 00:10:10.515 "compare": false, 00:10:10.515 "compare_and_write": false, 00:10:10.515 "abort": false, 00:10:10.515 "seek_hole": false, 00:10:10.515 "seek_data": false, 00:10:10.515 "copy": false, 00:10:10.515 "nvme_iov_md": false 00:10:10.515 }, 00:10:10.515 "memory_domains": [ 00:10:10.515 { 00:10:10.515 "dma_device_id": "system", 00:10:10.515 "dma_device_type": 1 00:10:10.515 }, 00:10:10.515 { 00:10:10.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.515 "dma_device_type": 2 00:10:10.515 }, 00:10:10.515 { 00:10:10.515 "dma_device_id": "system", 00:10:10.515 "dma_device_type": 1 00:10:10.515 }, 00:10:10.515 { 00:10:10.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.515 "dma_device_type": 2 00:10:10.515 }, 00:10:10.515 { 00:10:10.515 "dma_device_id": "system", 00:10:10.515 "dma_device_type": 1 00:10:10.515 }, 00:10:10.515 { 00:10:10.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.515 "dma_device_type": 2 00:10:10.515 } 00:10:10.515 ], 00:10:10.515 "driver_specific": { 00:10:10.515 "raid": { 00:10:10.515 "uuid": "35c32ac5-8353-4ad1-beba-3cd0bf1cca27", 00:10:10.515 "strip_size_kb": 0, 00:10:10.515 "state": "online", 00:10:10.515 "raid_level": "raid1", 00:10:10.515 "superblock": true, 00:10:10.515 "num_base_bdevs": 3, 00:10:10.515 "num_base_bdevs_discovered": 3, 00:10:10.515 "num_base_bdevs_operational": 3, 00:10:10.515 "base_bdevs_list": [ 00:10:10.515 { 00:10:10.515 "name": "pt1", 00:10:10.515 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.515 "is_configured": true, 00:10:10.515 "data_offset": 2048, 00:10:10.515 "data_size": 63488 00:10:10.515 }, 00:10:10.515 { 00:10:10.515 "name": "pt2", 00:10:10.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.515 "is_configured": true, 00:10:10.515 "data_offset": 2048, 00:10:10.515 "data_size": 63488 00:10:10.515 }, 00:10:10.515 { 00:10:10.515 "name": "pt3", 00:10:10.515 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:10.515 "is_configured": true, 00:10:10.515 "data_offset": 2048, 00:10:10.515 "data_size": 63488 00:10:10.515 } 00:10:10.515 ] 00:10:10.515 } 00:10:10.515 } 00:10:10.515 }' 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:10.515 pt2 00:10:10.515 pt3' 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:10.515 [2024-11-19 12:02:13.871359] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.515 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 35c32ac5-8353-4ad1-beba-3cd0bf1cca27 '!=' 35c32ac5-8353-4ad1-beba-3cd0bf1cca27 ']' 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.776 [2024-11-19 12:02:13.923104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.776 12:02:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.776 "name": "raid_bdev1", 00:10:10.776 "uuid": "35c32ac5-8353-4ad1-beba-3cd0bf1cca27", 00:10:10.776 "strip_size_kb": 0, 00:10:10.776 "state": "online", 00:10:10.776 "raid_level": "raid1", 00:10:10.776 "superblock": true, 00:10:10.776 "num_base_bdevs": 3, 00:10:10.776 "num_base_bdevs_discovered": 2, 00:10:10.776 "num_base_bdevs_operational": 2, 00:10:10.776 "base_bdevs_list": [ 00:10:10.776 { 00:10:10.776 "name": null, 00:10:10.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.776 "is_configured": false, 00:10:10.776 "data_offset": 0, 00:10:10.776 "data_size": 63488 00:10:10.776 }, 00:10:10.776 { 00:10:10.776 "name": "pt2", 00:10:10.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.776 "is_configured": true, 00:10:10.776 "data_offset": 2048, 00:10:10.776 "data_size": 63488 00:10:10.776 }, 00:10:10.776 { 00:10:10.776 "name": "pt3", 00:10:10.776 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.776 "is_configured": true, 00:10:10.776 "data_offset": 2048, 00:10:10.776 "data_size": 63488 00:10:10.776 } 
00:10:10.776 ] 00:10:10.776 }' 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.776 12:02:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.037 [2024-11-19 12:02:14.354276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.037 [2024-11-19 12:02:14.354318] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.037 [2024-11-19 12:02:14.354393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.037 [2024-11-19 12:02:14.354450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.037 [2024-11-19 12:02:14.354464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.037 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.297 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.297 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:11.297 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:11.297 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:11.297 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:11.297 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:11.297 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.297 12:02:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.297 [2024-11-19 12:02:14.422104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:11.297 [2024-11-19 12:02:14.422169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.297 [2024-11-19 12:02:14.422186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:11.297 [2024-11-19 12:02:14.422195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.297 [2024-11-19 12:02:14.424392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.297 [2024-11-19 12:02:14.424432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:11.297 [2024-11-19 12:02:14.424505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:11.297 [2024-11-19 12:02:14.424555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:11.297 pt2 00:10:11.297 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.297 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:11.297 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.297 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.297 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.297 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.298 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.298 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.298 12:02:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.298 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.298 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.298 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.298 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.298 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.298 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.298 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.298 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.298 "name": "raid_bdev1", 00:10:11.298 "uuid": "35c32ac5-8353-4ad1-beba-3cd0bf1cca27", 00:10:11.298 "strip_size_kb": 0, 00:10:11.298 "state": "configuring", 00:10:11.298 "raid_level": "raid1", 00:10:11.298 "superblock": true, 00:10:11.298 "num_base_bdevs": 3, 00:10:11.298 "num_base_bdevs_discovered": 1, 00:10:11.298 "num_base_bdevs_operational": 2, 00:10:11.298 "base_bdevs_list": [ 00:10:11.298 { 00:10:11.298 "name": null, 00:10:11.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.298 "is_configured": false, 00:10:11.298 "data_offset": 2048, 00:10:11.298 "data_size": 63488 00:10:11.298 }, 00:10:11.298 { 00:10:11.298 "name": "pt2", 00:10:11.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.298 "is_configured": true, 00:10:11.298 "data_offset": 2048, 00:10:11.298 "data_size": 63488 00:10:11.298 }, 00:10:11.298 { 00:10:11.298 "name": null, 00:10:11.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.298 "is_configured": false, 00:10:11.298 "data_offset": 2048, 00:10:11.298 "data_size": 63488 00:10:11.298 } 
00:10:11.298 ] 00:10:11.298 }' 00:10:11.298 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.298 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.558 [2024-11-19 12:02:14.841438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:11.558 [2024-11-19 12:02:14.841511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.558 [2024-11-19 12:02:14.841535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:11.558 [2024-11-19 12:02:14.841551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.558 [2024-11-19 12:02:14.842016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.558 [2024-11-19 12:02:14.842043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:11.558 [2024-11-19 12:02:14.842136] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:11.558 [2024-11-19 12:02:14.842171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:11.558 [2024-11-19 12:02:14.842281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:11.558 [2024-11-19 12:02:14.842298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:11.558 [2024-11-19 12:02:14.842538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:11.558 [2024-11-19 12:02:14.842686] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:11.558 [2024-11-19 12:02:14.842700] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:11.558 [2024-11-19 12:02:14.842836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.558 pt3 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.558 
12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.558 "name": "raid_bdev1", 00:10:11.558 "uuid": "35c32ac5-8353-4ad1-beba-3cd0bf1cca27", 00:10:11.558 "strip_size_kb": 0, 00:10:11.558 "state": "online", 00:10:11.558 "raid_level": "raid1", 00:10:11.558 "superblock": true, 00:10:11.558 "num_base_bdevs": 3, 00:10:11.558 "num_base_bdevs_discovered": 2, 00:10:11.558 "num_base_bdevs_operational": 2, 00:10:11.558 "base_bdevs_list": [ 00:10:11.558 { 00:10:11.558 "name": null, 00:10:11.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.558 "is_configured": false, 00:10:11.558 "data_offset": 2048, 00:10:11.558 "data_size": 63488 00:10:11.558 }, 00:10:11.558 { 00:10:11.558 "name": "pt2", 00:10:11.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.558 "is_configured": true, 00:10:11.558 "data_offset": 2048, 00:10:11.558 "data_size": 63488 00:10:11.558 }, 00:10:11.558 { 00:10:11.558 "name": "pt3", 00:10:11.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.558 "is_configured": true, 00:10:11.558 "data_offset": 2048, 00:10:11.558 "data_size": 63488 00:10:11.558 } 00:10:11.558 ] 00:10:11.558 }' 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.558 12:02:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.128 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:12.128 12:02:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.128 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.128 [2024-11-19 12:02:15.328587] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:12.128 [2024-11-19 12:02:15.328626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.128 [2024-11-19 12:02:15.328717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.128 [2024-11-19 12:02:15.328774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.128 [2024-11-19 12:02:15.328790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:12.128 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.128 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.129 [2024-11-19 12:02:15.388473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:12.129 [2024-11-19 12:02:15.388533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.129 [2024-11-19 12:02:15.388555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:12.129 [2024-11-19 12:02:15.388565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.129 [2024-11-19 12:02:15.390696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.129 [2024-11-19 12:02:15.390732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:12.129 [2024-11-19 12:02:15.390806] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:12.129 [2024-11-19 12:02:15.390856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:12.129 [2024-11-19 12:02:15.390983] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:12.129 [2024-11-19 12:02:15.391011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:12.129 [2024-11-19 12:02:15.391038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:12.129 [2024-11-19 12:02:15.391093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.129 pt1 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.129 "name": "raid_bdev1", 00:10:12.129 "uuid": "35c32ac5-8353-4ad1-beba-3cd0bf1cca27", 00:10:12.129 "strip_size_kb": 0, 00:10:12.129 "state": "configuring", 00:10:12.129 "raid_level": "raid1", 00:10:12.129 "superblock": true, 00:10:12.129 "num_base_bdevs": 3, 00:10:12.129 "num_base_bdevs_discovered": 1, 00:10:12.129 "num_base_bdevs_operational": 2, 00:10:12.129 "base_bdevs_list": [ 00:10:12.129 { 00:10:12.129 "name": null, 00:10:12.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.129 "is_configured": false, 00:10:12.129 "data_offset": 2048, 00:10:12.129 "data_size": 63488 00:10:12.129 }, 00:10:12.129 { 00:10:12.129 "name": "pt2", 00:10:12.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.129 "is_configured": true, 00:10:12.129 "data_offset": 2048, 00:10:12.129 "data_size": 63488 00:10:12.129 }, 00:10:12.129 { 00:10:12.129 "name": null, 00:10:12.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.129 "is_configured": false, 00:10:12.129 "data_offset": 2048, 00:10:12.129 "data_size": 63488 00:10:12.129 } 00:10:12.129 ] 00:10:12.129 }' 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.129 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.699 [2024-11-19 12:02:15.899618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:12.699 [2024-11-19 12:02:15.899683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.699 [2024-11-19 12:02:15.899704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:12.699 [2024-11-19 12:02:15.899714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.699 [2024-11-19 12:02:15.900182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.699 [2024-11-19 12:02:15.900220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:12.699 [2024-11-19 12:02:15.900304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:12.699 [2024-11-19 12:02:15.900353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:12.699 [2024-11-19 12:02:15.900473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:12.699 [2024-11-19 12:02:15.900487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:12.699 [2024-11-19 12:02:15.900726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:12.699 [2024-11-19 12:02:15.900892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:12.699 [2024-11-19 12:02:15.900910] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:12.699 [2024-11-19 12:02:15.901056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.699 pt3 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:12.699 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.699 "name": "raid_bdev1", 00:10:12.699 "uuid": "35c32ac5-8353-4ad1-beba-3cd0bf1cca27", 00:10:12.699 "strip_size_kb": 0, 00:10:12.699 "state": "online", 00:10:12.699 "raid_level": "raid1", 00:10:12.699 "superblock": true, 00:10:12.699 "num_base_bdevs": 3, 00:10:12.699 "num_base_bdevs_discovered": 2, 00:10:12.699 "num_base_bdevs_operational": 2, 00:10:12.699 "base_bdevs_list": [ 00:10:12.699 { 00:10:12.699 "name": null, 00:10:12.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.699 "is_configured": false, 00:10:12.699 "data_offset": 2048, 00:10:12.699 "data_size": 63488 00:10:12.699 }, 00:10:12.699 { 00:10:12.699 "name": "pt2", 00:10:12.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.699 "is_configured": true, 00:10:12.699 "data_offset": 2048, 00:10:12.699 "data_size": 63488 00:10:12.699 }, 00:10:12.699 { 00:10:12.699 "name": "pt3", 00:10:12.699 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.699 "is_configured": true, 00:10:12.699 "data_offset": 2048, 00:10:12.699 "data_size": 63488 00:10:12.699 } 00:10:12.699 ] 00:10:12.700 }' 00:10:12.700 12:02:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.700 12:02:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.960 12:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:12.960 12:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:12.960 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.960 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.220 [2024-11-19 12:02:16.387205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 35c32ac5-8353-4ad1-beba-3cd0bf1cca27 '!=' 35c32ac5-8353-4ad1-beba-3cd0bf1cca27 ']' 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68671 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68671 ']' 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68671 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68671 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.220 killing process with pid 68671 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68671' 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68671 00:10:13.220 [2024-11-19 12:02:16.456976] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.220 [2024-11-19 12:02:16.457089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.220 [2024-11-19 12:02:16.457149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.220 [2024-11-19 12:02:16.457162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:13.220 12:02:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68671 00:10:13.478 [2024-11-19 12:02:16.754820] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.893 12:02:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:14.893 00:10:14.893 real 0m7.611s 00:10:14.893 user 0m11.860s 00:10:14.893 sys 0m1.418s 00:10:14.893 12:02:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.893 12:02:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.893 ************************************ 00:10:14.893 END TEST raid_superblock_test 00:10:14.893 ************************************ 00:10:14.893 12:02:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:14.893 12:02:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:14.893 12:02:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.893 12:02:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.893 ************************************ 00:10:14.893 START TEST raid_read_error_test 00:10:14.893 ************************************ 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:14.893 12:02:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:14.893 12:02:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IzqnOJ5CKZ 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69117 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69117 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69117 ']' 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.893 12:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.893 [2024-11-19 12:02:18.037476] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:14.893 [2024-11-19 12:02:18.037627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69117 ] 00:10:14.893 [2024-11-19 12:02:18.220664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.153 [2024-11-19 12:02:18.340455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.412 [2024-11-19 12:02:18.539473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.412 [2024-11-19 12:02:18.539547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.673 BaseBdev1_malloc 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.673 true 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.673 [2024-11-19 12:02:18.962143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:15.673 [2024-11-19 12:02:18.962212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.673 [2024-11-19 12:02:18.962233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:15.673 [2024-11-19 12:02:18.962243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.673 [2024-11-19 12:02:18.964309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.673 [2024-11-19 12:02:18.964350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:15.673 BaseBdev1 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.673 12:02:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.673 BaseBdev2_malloc 00:10:15.673 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.673 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:15.673 12:02:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.673 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.673 true 00:10:15.673 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.673 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:15.673 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.673 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.673 [2024-11-19 12:02:19.030218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:15.673 [2024-11-19 12:02:19.030281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.673 [2024-11-19 12:02:19.030298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:15.673 [2024-11-19 12:02:19.030308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.673 [2024-11-19 12:02:19.032318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.673 [2024-11-19 12:02:19.032361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:15.673 BaseBdev2 00:10:15.673 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.673 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.673 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:15.673 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.673 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.933 BaseBdev3_malloc 00:10:15.933 12:02:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.933 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:15.933 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.933 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.933 true 00:10:15.933 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.933 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:15.933 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.933 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.933 [2024-11-19 12:02:19.109482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:15.933 [2024-11-19 12:02:19.109543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.933 [2024-11-19 12:02:19.109558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:15.933 [2024-11-19 12:02:19.109567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.933 [2024-11-19 12:02:19.111503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.933 [2024-11-19 12:02:19.111543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:15.933 BaseBdev3 00:10:15.933 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.933 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:15.933 12:02:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.933 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.933 [2024-11-19 12:02:19.121522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.933 [2024-11-19 12:02:19.123184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.933 [2024-11-19 12:02:19.123256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.933 [2024-11-19 12:02:19.123440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:15.933 [2024-11-19 12:02:19.123453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:15.933 [2024-11-19 12:02:19.123676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:15.933 [2024-11-19 12:02:19.123835] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:15.933 [2024-11-19 12:02:19.123861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:15.933 [2024-11-19 12:02:19.124021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.934 12:02:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.934 "name": "raid_bdev1", 00:10:15.934 "uuid": "73c8c96d-4024-454f-84ee-a56cc6922712", 00:10:15.934 "strip_size_kb": 0, 00:10:15.934 "state": "online", 00:10:15.934 "raid_level": "raid1", 00:10:15.934 "superblock": true, 00:10:15.934 "num_base_bdevs": 3, 00:10:15.934 "num_base_bdevs_discovered": 3, 00:10:15.934 "num_base_bdevs_operational": 3, 00:10:15.934 "base_bdevs_list": [ 00:10:15.934 { 00:10:15.934 "name": "BaseBdev1", 00:10:15.934 "uuid": "07f1bff2-f554-5b82-ab8f-a2ba972e029c", 00:10:15.934 "is_configured": true, 00:10:15.934 "data_offset": 2048, 00:10:15.934 "data_size": 63488 00:10:15.934 }, 00:10:15.934 { 00:10:15.934 "name": "BaseBdev2", 00:10:15.934 "uuid": "47270511-f777-577e-9d38-868e019c2231", 00:10:15.934 "is_configured": true, 00:10:15.934 "data_offset": 2048, 00:10:15.934 "data_size": 63488 
00:10:15.934 }, 00:10:15.934 { 00:10:15.934 "name": "BaseBdev3", 00:10:15.934 "uuid": "c47501c3-96a4-5df2-9bfe-260fe7342252", 00:10:15.934 "is_configured": true, 00:10:15.934 "data_offset": 2048, 00:10:15.934 "data_size": 63488 00:10:15.934 } 00:10:15.934 ] 00:10:15.934 }' 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.934 12:02:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.194 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:16.194 12:02:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:16.454 [2024-11-19 12:02:19.649921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.394 
12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.394 "name": "raid_bdev1", 00:10:17.394 "uuid": "73c8c96d-4024-454f-84ee-a56cc6922712", 00:10:17.394 "strip_size_kb": 0, 00:10:17.394 "state": "online", 00:10:17.394 "raid_level": "raid1", 00:10:17.394 "superblock": true, 00:10:17.394 "num_base_bdevs": 3, 00:10:17.394 "num_base_bdevs_discovered": 3, 00:10:17.394 "num_base_bdevs_operational": 3, 00:10:17.394 "base_bdevs_list": [ 00:10:17.394 { 00:10:17.394 "name": "BaseBdev1", 00:10:17.394 "uuid": "07f1bff2-f554-5b82-ab8f-a2ba972e029c", 
00:10:17.394 "is_configured": true, 00:10:17.394 "data_offset": 2048, 00:10:17.394 "data_size": 63488 00:10:17.394 }, 00:10:17.394 { 00:10:17.394 "name": "BaseBdev2", 00:10:17.394 "uuid": "47270511-f777-577e-9d38-868e019c2231", 00:10:17.394 "is_configured": true, 00:10:17.394 "data_offset": 2048, 00:10:17.394 "data_size": 63488 00:10:17.394 }, 00:10:17.394 { 00:10:17.394 "name": "BaseBdev3", 00:10:17.394 "uuid": "c47501c3-96a4-5df2-9bfe-260fe7342252", 00:10:17.394 "is_configured": true, 00:10:17.394 "data_offset": 2048, 00:10:17.394 "data_size": 63488 00:10:17.394 } 00:10:17.394 ] 00:10:17.394 }' 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.394 12:02:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.654 12:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.654 12:02:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.654 12:02:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.654 [2024-11-19 12:02:21.024486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.654 [2024-11-19 12:02:21.024534] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.654 [2024-11-19 12:02:21.027211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.654 [2024-11-19 12:02:21.027262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.654 [2024-11-19 12:02:21.027364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.654 [2024-11-19 12:02:21.027375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:17.915 { 00:10:17.915 "results": [ 00:10:17.915 { 00:10:17.915 "job": "raid_bdev1", 
00:10:17.915 "core_mask": "0x1", 00:10:17.915 "workload": "randrw", 00:10:17.915 "percentage": 50, 00:10:17.915 "status": "finished", 00:10:17.915 "queue_depth": 1, 00:10:17.915 "io_size": 131072, 00:10:17.915 "runtime": 1.375571, 00:10:17.915 "iops": 13797.906469386167, 00:10:17.915 "mibps": 1724.738308673271, 00:10:17.915 "io_failed": 0, 00:10:17.915 "io_timeout": 0, 00:10:17.915 "avg_latency_us": 69.94930154011807, 00:10:17.915 "min_latency_us": 22.246288209606988, 00:10:17.915 "max_latency_us": 1287.825327510917 00:10:17.915 } 00:10:17.915 ], 00:10:17.915 "core_count": 1 00:10:17.915 } 00:10:17.915 12:02:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.915 12:02:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69117 00:10:17.915 12:02:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69117 ']' 00:10:17.915 12:02:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69117 00:10:17.915 12:02:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:17.915 12:02:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.915 12:02:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69117 00:10:17.915 12:02:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.915 12:02:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.915 killing process with pid 69117 00:10:17.915 12:02:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69117' 00:10:17.915 12:02:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69117 00:10:17.915 [2024-11-19 12:02:21.072535] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.915 12:02:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69117 00:10:18.174 [2024-11-19 12:02:21.302732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.113 12:02:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IzqnOJ5CKZ 00:10:19.113 12:02:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:19.113 12:02:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:19.113 12:02:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:19.113 12:02:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:19.113 12:02:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.113 12:02:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:19.113 12:02:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:19.113 00:10:19.113 real 0m4.553s 00:10:19.113 user 0m5.436s 00:10:19.113 sys 0m0.561s 00:10:19.113 12:02:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.113 12:02:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.113 ************************************ 00:10:19.113 END TEST raid_read_error_test 00:10:19.113 ************************************ 00:10:19.373 12:02:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:19.373 12:02:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:19.373 12:02:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.373 12:02:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.373 ************************************ 00:10:19.373 START TEST raid_write_error_test 00:10:19.373 ************************************ 00:10:19.373 12:02:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eA1Pf6BouA 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69261 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69261 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69261 ']' 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.373 12:02:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.373 [2024-11-19 12:02:22.648016] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:10:19.373 [2024-11-19 12:02:22.648133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69261 ] 00:10:19.633 [2024-11-19 12:02:22.806448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.633 [2024-11-19 12:02:22.916746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.892 [2024-11-19 12:02:23.109454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.892 [2024-11-19 12:02:23.109514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.152 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.152 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:20.152 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.152 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:20.152 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.152 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.152 BaseBdev1_malloc 00:10:20.152 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.152 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:20.152 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.152 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.152 true 00:10:20.152 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.152 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:20.152 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.152 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.152 [2024-11-19 12:02:23.524628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:20.152 [2024-11-19 12:02:23.524691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.152 [2024-11-19 12:02:23.524711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:20.152 [2024-11-19 12:02:23.524723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.413 [2024-11-19 12:02:23.526869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.413 [2024-11-19 12:02:23.526909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:20.413 BaseBdev1 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.413 BaseBdev2_malloc 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.413 true 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.413 [2024-11-19 12:02:23.588405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:20.413 [2024-11-19 12:02:23.588461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.413 [2024-11-19 12:02:23.588476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:20.413 [2024-11-19 12:02:23.588486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.413 [2024-11-19 12:02:23.590353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.413 [2024-11-19 12:02:23.590388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:20.413 BaseBdev2 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.413 12:02:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.413 BaseBdev3_malloc 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.413 true 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.413 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.413 [2024-11-19 12:02:23.667429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:20.413 [2024-11-19 12:02:23.667486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.413 [2024-11-19 12:02:23.667504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:20.413 [2024-11-19 12:02:23.667515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.413 [2024-11-19 12:02:23.669633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.413 [2024-11-19 12:02:23.669670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:20.413 BaseBdev3 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.414 [2024-11-19 12:02:23.675482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.414 [2024-11-19 12:02:23.677220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.414 [2024-11-19 12:02:23.677294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.414 [2024-11-19 12:02:23.677475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:20.414 [2024-11-19 12:02:23.677492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:20.414 [2024-11-19 12:02:23.677711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:20.414 [2024-11-19 12:02:23.677882] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:20.414 [2024-11-19 12:02:23.677899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:20.414 [2024-11-19 12:02:23.678058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.414 "name": "raid_bdev1", 00:10:20.414 "uuid": "cfa74b27-a02a-4c84-882a-1cec2f7624ee", 00:10:20.414 "strip_size_kb": 0, 00:10:20.414 "state": "online", 00:10:20.414 "raid_level": "raid1", 00:10:20.414 "superblock": true, 00:10:20.414 "num_base_bdevs": 3, 00:10:20.414 "num_base_bdevs_discovered": 3, 00:10:20.414 "num_base_bdevs_operational": 3, 00:10:20.414 "base_bdevs_list": [ 00:10:20.414 { 00:10:20.414 "name": "BaseBdev1", 00:10:20.414 
"uuid": "ec4e8bf2-303b-5a49-ac39-cef9430d4de2", 00:10:20.414 "is_configured": true, 00:10:20.414 "data_offset": 2048, 00:10:20.414 "data_size": 63488 00:10:20.414 }, 00:10:20.414 { 00:10:20.414 "name": "BaseBdev2", 00:10:20.414 "uuid": "9e478d34-4445-506e-8a8f-f4ad1df0e6ee", 00:10:20.414 "is_configured": true, 00:10:20.414 "data_offset": 2048, 00:10:20.414 "data_size": 63488 00:10:20.414 }, 00:10:20.414 { 00:10:20.414 "name": "BaseBdev3", 00:10:20.414 "uuid": "fd0c9027-c1e0-5420-9553-c4b39fa4b7dd", 00:10:20.414 "is_configured": true, 00:10:20.414 "data_offset": 2048, 00:10:20.414 "data_size": 63488 00:10:20.414 } 00:10:20.414 ] 00:10:20.414 }' 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.414 12:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.044 12:02:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:21.044 12:02:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:21.044 [2024-11-19 12:02:24.211831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.983 [2024-11-19 12:02:25.139464] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:21.983 [2024-11-19 12:02:25.139532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:21.983 [2024-11-19 12:02:25.139746] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.983 "name": "raid_bdev1", 00:10:21.983 "uuid": "cfa74b27-a02a-4c84-882a-1cec2f7624ee", 00:10:21.983 "strip_size_kb": 0, 00:10:21.983 "state": "online", 00:10:21.983 "raid_level": "raid1", 00:10:21.983 "superblock": true, 00:10:21.983 "num_base_bdevs": 3, 00:10:21.983 "num_base_bdevs_discovered": 2, 00:10:21.983 "num_base_bdevs_operational": 2, 00:10:21.983 "base_bdevs_list": [ 00:10:21.983 { 00:10:21.983 "name": null, 00:10:21.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.983 "is_configured": false, 00:10:21.983 "data_offset": 0, 00:10:21.983 "data_size": 63488 00:10:21.983 }, 00:10:21.983 { 00:10:21.983 "name": "BaseBdev2", 00:10:21.983 "uuid": "9e478d34-4445-506e-8a8f-f4ad1df0e6ee", 00:10:21.983 "is_configured": true, 00:10:21.983 "data_offset": 2048, 00:10:21.983 "data_size": 63488 00:10:21.983 }, 00:10:21.983 { 00:10:21.983 "name": "BaseBdev3", 00:10:21.983 "uuid": "fd0c9027-c1e0-5420-9553-c4b39fa4b7dd", 00:10:21.983 "is_configured": true, 00:10:21.983 "data_offset": 2048, 00:10:21.983 "data_size": 63488 00:10:21.983 } 00:10:21.983 ] 00:10:21.983 }' 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.983 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.243 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:22.243 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.243 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.243 [2024-11-19 12:02:25.545128] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.243 [2024-11-19 12:02:25.545174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.243 [2024-11-19 12:02:25.547681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.243 [2024-11-19 12:02:25.547736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.243 [2024-11-19 12:02:25.547813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.243 [2024-11-19 12:02:25.547836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:22.243 { 00:10:22.243 "results": [ 00:10:22.243 { 00:10:22.243 "job": "raid_bdev1", 00:10:22.243 "core_mask": "0x1", 00:10:22.243 "workload": "randrw", 00:10:22.243 "percentage": 50, 00:10:22.243 "status": "finished", 00:10:22.243 "queue_depth": 1, 00:10:22.243 "io_size": 131072, 00:10:22.243 "runtime": 1.33413, 00:10:22.243 "iops": 15170.18581397615, 00:10:22.243 "mibps": 1896.2732267470187, 00:10:22.243 "io_failed": 0, 00:10:22.243 "io_timeout": 0, 00:10:22.243 "avg_latency_us": 63.38780964849956, 00:10:22.243 "min_latency_us": 22.46986899563319, 00:10:22.243 "max_latency_us": 1330.7528384279476 00:10:22.243 } 00:10:22.243 ], 00:10:22.243 "core_count": 1 00:10:22.243 } 00:10:22.243 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.243 12:02:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69261 00:10:22.243 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69261 ']' 00:10:22.243 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69261 00:10:22.243 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:22.243 12:02:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.243 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69261 00:10:22.243 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.243 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.243 killing process with pid 69261 00:10:22.243 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69261' 00:10:22.243 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69261 00:10:22.243 [2024-11-19 12:02:25.593076] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.243 12:02:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69261 00:10:22.503 [2024-11-19 12:02:25.820469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.883 12:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:23.883 12:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eA1Pf6BouA 00:10:23.883 12:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:23.883 12:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:23.883 12:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:23.883 12:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.883 12:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:23.883 12:02:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:23.883 00:10:23.883 real 0m4.432s 00:10:23.883 user 0m5.244s 00:10:23.883 sys 0m0.536s 00:10:23.883 12:02:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.883 12:02:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.883 ************************************ 00:10:23.883 END TEST raid_write_error_test 00:10:23.883 ************************************ 00:10:23.883 12:02:27 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:23.883 12:02:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:23.883 12:02:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:23.883 12:02:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:23.883 12:02:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.883 12:02:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.883 ************************************ 00:10:23.883 START TEST raid_state_function_test 00:10:23.883 ************************************ 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.883 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:23.884 
12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69401 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69401' 00:10:23.884 Process raid pid: 69401 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69401 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69401 ']' 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.884 12:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.884 [2024-11-19 12:02:27.145994] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:23.884 [2024-11-19 12:02:27.146591] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.144 [2024-11-19 12:02:27.325657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.144 [2024-11-19 12:02:27.441593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.403 [2024-11-19 12:02:27.632232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.403 [2024-11-19 12:02:27.632289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.663 [2024-11-19 12:02:27.972070] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.663 [2024-11-19 12:02:27.972134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.663 [2024-11-19 12:02:27.972144] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.663 [2024-11-19 12:02:27.972155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.663 [2024-11-19 12:02:27.972162] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:24.663 [2024-11-19 12:02:27.972171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:24.663 [2024-11-19 12:02:27.972177] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:24.663 [2024-11-19 12:02:27.972186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.663 12:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.663 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.663 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.663 "name": "Existed_Raid", 00:10:24.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.663 "strip_size_kb": 64, 00:10:24.663 "state": "configuring", 00:10:24.663 "raid_level": "raid0", 00:10:24.663 "superblock": false, 00:10:24.663 "num_base_bdevs": 4, 00:10:24.663 "num_base_bdevs_discovered": 0, 00:10:24.663 "num_base_bdevs_operational": 4, 00:10:24.663 "base_bdevs_list": [ 00:10:24.663 { 00:10:24.663 "name": "BaseBdev1", 00:10:24.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.663 "is_configured": false, 00:10:24.663 "data_offset": 0, 00:10:24.663 "data_size": 0 00:10:24.663 }, 00:10:24.663 { 00:10:24.663 "name": "BaseBdev2", 00:10:24.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.663 "is_configured": false, 00:10:24.663 "data_offset": 0, 00:10:24.663 "data_size": 0 00:10:24.663 }, 00:10:24.663 { 00:10:24.663 "name": "BaseBdev3", 00:10:24.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.663 "is_configured": false, 00:10:24.663 "data_offset": 0, 00:10:24.663 "data_size": 0 00:10:24.663 }, 00:10:24.663 { 00:10:24.663 "name": "BaseBdev4", 00:10:24.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.663 "is_configured": false, 00:10:24.663 "data_offset": 0, 00:10:24.663 "data_size": 0 00:10:24.663 } 00:10:24.663 ] 00:10:24.663 }' 00:10:24.663 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.663 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.232 [2024-11-19 12:02:28.407251] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.232 [2024-11-19 12:02:28.407297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.232 [2024-11-19 12:02:28.419218] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.232 [2024-11-19 12:02:28.419261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.232 [2024-11-19 12:02:28.419270] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.232 [2024-11-19 12:02:28.419279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.232 [2024-11-19 12:02:28.419286] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.232 [2024-11-19 12:02:28.419294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.232 [2024-11-19 12:02:28.419301] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.232 [2024-11-19 12:02:28.419310] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.232 [2024-11-19 12:02:28.465931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.232 BaseBdev1 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.232 [ 00:10:25.232 { 00:10:25.232 "name": "BaseBdev1", 00:10:25.232 "aliases": [ 00:10:25.232 "4721b97a-c165-441a-8f01-a2d44299812a" 00:10:25.232 ], 00:10:25.232 "product_name": "Malloc disk", 00:10:25.232 "block_size": 512, 00:10:25.232 "num_blocks": 65536, 00:10:25.232 "uuid": "4721b97a-c165-441a-8f01-a2d44299812a", 00:10:25.232 "assigned_rate_limits": { 00:10:25.232 "rw_ios_per_sec": 0, 00:10:25.232 "rw_mbytes_per_sec": 0, 00:10:25.232 "r_mbytes_per_sec": 0, 00:10:25.232 "w_mbytes_per_sec": 0 00:10:25.232 }, 00:10:25.232 "claimed": true, 00:10:25.232 "claim_type": "exclusive_write", 00:10:25.232 "zoned": false, 00:10:25.232 "supported_io_types": { 00:10:25.232 "read": true, 00:10:25.232 "write": true, 00:10:25.232 "unmap": true, 00:10:25.232 "flush": true, 00:10:25.232 "reset": true, 00:10:25.232 "nvme_admin": false, 00:10:25.232 "nvme_io": false, 00:10:25.232 "nvme_io_md": false, 00:10:25.232 "write_zeroes": true, 00:10:25.232 "zcopy": true, 00:10:25.232 "get_zone_info": false, 00:10:25.232 "zone_management": false, 00:10:25.232 "zone_append": false, 00:10:25.232 "compare": false, 00:10:25.232 "compare_and_write": false, 00:10:25.232 "abort": true, 00:10:25.232 "seek_hole": false, 00:10:25.232 "seek_data": false, 00:10:25.232 "copy": true, 00:10:25.232 "nvme_iov_md": false 00:10:25.232 }, 00:10:25.232 "memory_domains": [ 00:10:25.232 { 00:10:25.232 "dma_device_id": "system", 00:10:25.232 "dma_device_type": 1 00:10:25.232 }, 00:10:25.232 { 00:10:25.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.232 "dma_device_type": 2 00:10:25.232 } 00:10:25.232 ], 00:10:25.232 "driver_specific": {} 00:10:25.232 } 00:10:25.232 ] 00:10:25.232 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.233 "name": "Existed_Raid", 
00:10:25.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.233 "strip_size_kb": 64, 00:10:25.233 "state": "configuring", 00:10:25.233 "raid_level": "raid0", 00:10:25.233 "superblock": false, 00:10:25.233 "num_base_bdevs": 4, 00:10:25.233 "num_base_bdevs_discovered": 1, 00:10:25.233 "num_base_bdevs_operational": 4, 00:10:25.233 "base_bdevs_list": [ 00:10:25.233 { 00:10:25.233 "name": "BaseBdev1", 00:10:25.233 "uuid": "4721b97a-c165-441a-8f01-a2d44299812a", 00:10:25.233 "is_configured": true, 00:10:25.233 "data_offset": 0, 00:10:25.233 "data_size": 65536 00:10:25.233 }, 00:10:25.233 { 00:10:25.233 "name": "BaseBdev2", 00:10:25.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.233 "is_configured": false, 00:10:25.233 "data_offset": 0, 00:10:25.233 "data_size": 0 00:10:25.233 }, 00:10:25.233 { 00:10:25.233 "name": "BaseBdev3", 00:10:25.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.233 "is_configured": false, 00:10:25.233 "data_offset": 0, 00:10:25.233 "data_size": 0 00:10:25.233 }, 00:10:25.233 { 00:10:25.233 "name": "BaseBdev4", 00:10:25.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.233 "is_configured": false, 00:10:25.233 "data_offset": 0, 00:10:25.233 "data_size": 0 00:10:25.233 } 00:10:25.233 ] 00:10:25.233 }' 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.233 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.802 [2024-11-19 12:02:28.889243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.802 [2024-11-19 12:02:28.889306] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.802 [2024-11-19 12:02:28.897272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.802 [2024-11-19 12:02:28.898984] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.802 [2024-11-19 12:02:28.899047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.802 [2024-11-19 12:02:28.899057] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.802 [2024-11-19 12:02:28.899068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.802 [2024-11-19 12:02:28.899075] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.802 [2024-11-19 12:02:28.899083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.802 "name": "Existed_Raid", 00:10:25.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.802 "strip_size_kb": 64, 00:10:25.802 "state": "configuring", 00:10:25.802 "raid_level": "raid0", 00:10:25.802 "superblock": false, 00:10:25.802 "num_base_bdevs": 4, 00:10:25.802 
"num_base_bdevs_discovered": 1, 00:10:25.802 "num_base_bdevs_operational": 4, 00:10:25.802 "base_bdevs_list": [ 00:10:25.802 { 00:10:25.802 "name": "BaseBdev1", 00:10:25.802 "uuid": "4721b97a-c165-441a-8f01-a2d44299812a", 00:10:25.802 "is_configured": true, 00:10:25.802 "data_offset": 0, 00:10:25.802 "data_size": 65536 00:10:25.802 }, 00:10:25.802 { 00:10:25.802 "name": "BaseBdev2", 00:10:25.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.802 "is_configured": false, 00:10:25.802 "data_offset": 0, 00:10:25.802 "data_size": 0 00:10:25.802 }, 00:10:25.802 { 00:10:25.802 "name": "BaseBdev3", 00:10:25.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.802 "is_configured": false, 00:10:25.802 "data_offset": 0, 00:10:25.802 "data_size": 0 00:10:25.802 }, 00:10:25.802 { 00:10:25.802 "name": "BaseBdev4", 00:10:25.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.802 "is_configured": false, 00:10:25.802 "data_offset": 0, 00:10:25.802 "data_size": 0 00:10:25.802 } 00:10:25.802 ] 00:10:25.802 }' 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.802 12:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.061 [2024-11-19 12:02:29.388406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.061 BaseBdev2 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:26.061 12:02:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.061 [ 00:10:26.061 { 00:10:26.061 "name": "BaseBdev2", 00:10:26.061 "aliases": [ 00:10:26.061 "90de8e36-2d98-4667-aa1b-4c51f7346a8c" 00:10:26.061 ], 00:10:26.061 "product_name": "Malloc disk", 00:10:26.061 "block_size": 512, 00:10:26.061 "num_blocks": 65536, 00:10:26.061 "uuid": "90de8e36-2d98-4667-aa1b-4c51f7346a8c", 00:10:26.061 "assigned_rate_limits": { 00:10:26.061 "rw_ios_per_sec": 0, 00:10:26.061 "rw_mbytes_per_sec": 0, 00:10:26.061 "r_mbytes_per_sec": 0, 00:10:26.061 "w_mbytes_per_sec": 0 00:10:26.061 }, 00:10:26.061 "claimed": true, 00:10:26.061 "claim_type": "exclusive_write", 00:10:26.061 "zoned": false, 00:10:26.061 "supported_io_types": { 
00:10:26.061 "read": true, 00:10:26.061 "write": true, 00:10:26.061 "unmap": true, 00:10:26.061 "flush": true, 00:10:26.061 "reset": true, 00:10:26.061 "nvme_admin": false, 00:10:26.061 "nvme_io": false, 00:10:26.061 "nvme_io_md": false, 00:10:26.061 "write_zeroes": true, 00:10:26.061 "zcopy": true, 00:10:26.061 "get_zone_info": false, 00:10:26.061 "zone_management": false, 00:10:26.061 "zone_append": false, 00:10:26.061 "compare": false, 00:10:26.061 "compare_and_write": false, 00:10:26.061 "abort": true, 00:10:26.061 "seek_hole": false, 00:10:26.061 "seek_data": false, 00:10:26.061 "copy": true, 00:10:26.061 "nvme_iov_md": false 00:10:26.061 }, 00:10:26.061 "memory_domains": [ 00:10:26.061 { 00:10:26.061 "dma_device_id": "system", 00:10:26.061 "dma_device_type": 1 00:10:26.061 }, 00:10:26.061 { 00:10:26.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.061 "dma_device_type": 2 00:10:26.061 } 00:10:26.061 ], 00:10:26.061 "driver_specific": {} 00:10:26.061 } 00:10:26.061 ] 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.061 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.320 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.320 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.320 "name": "Existed_Raid", 00:10:26.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.320 "strip_size_kb": 64, 00:10:26.320 "state": "configuring", 00:10:26.320 "raid_level": "raid0", 00:10:26.320 "superblock": false, 00:10:26.320 "num_base_bdevs": 4, 00:10:26.320 "num_base_bdevs_discovered": 2, 00:10:26.320 "num_base_bdevs_operational": 4, 00:10:26.320 "base_bdevs_list": [ 00:10:26.320 { 00:10:26.320 "name": "BaseBdev1", 00:10:26.320 "uuid": "4721b97a-c165-441a-8f01-a2d44299812a", 00:10:26.320 "is_configured": true, 00:10:26.320 "data_offset": 0, 00:10:26.320 "data_size": 65536 00:10:26.320 }, 00:10:26.320 { 00:10:26.320 "name": "BaseBdev2", 00:10:26.320 "uuid": "90de8e36-2d98-4667-aa1b-4c51f7346a8c", 00:10:26.320 
"is_configured": true, 00:10:26.320 "data_offset": 0, 00:10:26.320 "data_size": 65536 00:10:26.320 }, 00:10:26.320 { 00:10:26.320 "name": "BaseBdev3", 00:10:26.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.320 "is_configured": false, 00:10:26.320 "data_offset": 0, 00:10:26.320 "data_size": 0 00:10:26.320 }, 00:10:26.320 { 00:10:26.320 "name": "BaseBdev4", 00:10:26.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.320 "is_configured": false, 00:10:26.320 "data_offset": 0, 00:10:26.320 "data_size": 0 00:10:26.320 } 00:10:26.320 ] 00:10:26.320 }' 00:10:26.320 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.320 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.580 [2024-11-19 12:02:29.932217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.580 BaseBdev3 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.580 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.839 [ 00:10:26.839 { 00:10:26.839 "name": "BaseBdev3", 00:10:26.839 "aliases": [ 00:10:26.839 "67586ac2-1a6d-4a8f-be49-dd37d26b288c" 00:10:26.839 ], 00:10:26.839 "product_name": "Malloc disk", 00:10:26.839 "block_size": 512, 00:10:26.839 "num_blocks": 65536, 00:10:26.839 "uuid": "67586ac2-1a6d-4a8f-be49-dd37d26b288c", 00:10:26.839 "assigned_rate_limits": { 00:10:26.839 "rw_ios_per_sec": 0, 00:10:26.839 "rw_mbytes_per_sec": 0, 00:10:26.839 "r_mbytes_per_sec": 0, 00:10:26.839 "w_mbytes_per_sec": 0 00:10:26.839 }, 00:10:26.839 "claimed": true, 00:10:26.839 "claim_type": "exclusive_write", 00:10:26.839 "zoned": false, 00:10:26.839 "supported_io_types": { 00:10:26.839 "read": true, 00:10:26.839 "write": true, 00:10:26.839 "unmap": true, 00:10:26.839 "flush": true, 00:10:26.839 "reset": true, 00:10:26.839 "nvme_admin": false, 00:10:26.839 "nvme_io": false, 00:10:26.839 "nvme_io_md": false, 00:10:26.839 "write_zeroes": true, 00:10:26.839 "zcopy": true, 00:10:26.839 "get_zone_info": false, 00:10:26.839 "zone_management": false, 00:10:26.839 "zone_append": false, 00:10:26.839 "compare": false, 00:10:26.839 "compare_and_write": false, 
00:10:26.839 "abort": true, 00:10:26.839 "seek_hole": false, 00:10:26.839 "seek_data": false, 00:10:26.839 "copy": true, 00:10:26.839 "nvme_iov_md": false 00:10:26.839 }, 00:10:26.839 "memory_domains": [ 00:10:26.839 { 00:10:26.839 "dma_device_id": "system", 00:10:26.839 "dma_device_type": 1 00:10:26.839 }, 00:10:26.839 { 00:10:26.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.839 "dma_device_type": 2 00:10:26.839 } 00:10:26.839 ], 00:10:26.839 "driver_specific": {} 00:10:26.839 } 00:10:26.839 ] 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.839 12:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.839 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.839 "name": "Existed_Raid", 00:10:26.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.839 "strip_size_kb": 64, 00:10:26.839 "state": "configuring", 00:10:26.839 "raid_level": "raid0", 00:10:26.839 "superblock": false, 00:10:26.839 "num_base_bdevs": 4, 00:10:26.839 "num_base_bdevs_discovered": 3, 00:10:26.839 "num_base_bdevs_operational": 4, 00:10:26.839 "base_bdevs_list": [ 00:10:26.839 { 00:10:26.839 "name": "BaseBdev1", 00:10:26.839 "uuid": "4721b97a-c165-441a-8f01-a2d44299812a", 00:10:26.839 "is_configured": true, 00:10:26.839 "data_offset": 0, 00:10:26.839 "data_size": 65536 00:10:26.839 }, 00:10:26.839 { 00:10:26.839 "name": "BaseBdev2", 00:10:26.839 "uuid": "90de8e36-2d98-4667-aa1b-4c51f7346a8c", 00:10:26.839 "is_configured": true, 00:10:26.839 "data_offset": 0, 00:10:26.839 "data_size": 65536 00:10:26.839 }, 00:10:26.839 { 00:10:26.839 "name": "BaseBdev3", 00:10:26.839 "uuid": "67586ac2-1a6d-4a8f-be49-dd37d26b288c", 00:10:26.839 "is_configured": true, 00:10:26.839 "data_offset": 0, 00:10:26.839 "data_size": 65536 00:10:26.839 }, 00:10:26.839 { 00:10:26.839 "name": "BaseBdev4", 00:10:26.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.839 "is_configured": false, 
00:10:26.839 "data_offset": 0, 00:10:26.839 "data_size": 0 00:10:26.839 } 00:10:26.839 ] 00:10:26.839 }' 00:10:26.839 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.839 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.098 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:27.098 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.098 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.358 [2024-11-19 12:02:30.481942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:27.358 [2024-11-19 12:02:30.482013] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:27.358 [2024-11-19 12:02:30.482025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:27.358 [2024-11-19 12:02:30.482284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:27.358 [2024-11-19 12:02:30.482453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:27.358 [2024-11-19 12:02:30.482474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:27.358 [2024-11-19 12:02:30.482723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.358 BaseBdev4 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.358 [ 00:10:27.358 { 00:10:27.358 "name": "BaseBdev4", 00:10:27.358 "aliases": [ 00:10:27.358 "dff2097d-0d2e-410e-9b8e-ed507a16d772" 00:10:27.358 ], 00:10:27.358 "product_name": "Malloc disk", 00:10:27.358 "block_size": 512, 00:10:27.358 "num_blocks": 65536, 00:10:27.358 "uuid": "dff2097d-0d2e-410e-9b8e-ed507a16d772", 00:10:27.358 "assigned_rate_limits": { 00:10:27.358 "rw_ios_per_sec": 0, 00:10:27.358 "rw_mbytes_per_sec": 0, 00:10:27.358 "r_mbytes_per_sec": 0, 00:10:27.358 "w_mbytes_per_sec": 0 00:10:27.358 }, 00:10:27.358 "claimed": true, 00:10:27.358 "claim_type": "exclusive_write", 00:10:27.358 "zoned": false, 00:10:27.358 "supported_io_types": { 00:10:27.358 "read": true, 00:10:27.358 "write": true, 00:10:27.358 "unmap": true, 00:10:27.358 "flush": true, 00:10:27.358 "reset": true, 00:10:27.358 
"nvme_admin": false, 00:10:27.358 "nvme_io": false, 00:10:27.358 "nvme_io_md": false, 00:10:27.358 "write_zeroes": true, 00:10:27.358 "zcopy": true, 00:10:27.358 "get_zone_info": false, 00:10:27.358 "zone_management": false, 00:10:27.358 "zone_append": false, 00:10:27.358 "compare": false, 00:10:27.358 "compare_and_write": false, 00:10:27.358 "abort": true, 00:10:27.358 "seek_hole": false, 00:10:27.358 "seek_data": false, 00:10:27.358 "copy": true, 00:10:27.358 "nvme_iov_md": false 00:10:27.358 }, 00:10:27.358 "memory_domains": [ 00:10:27.358 { 00:10:27.358 "dma_device_id": "system", 00:10:27.358 "dma_device_type": 1 00:10:27.358 }, 00:10:27.358 { 00:10:27.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.358 "dma_device_type": 2 00:10:27.358 } 00:10:27.358 ], 00:10:27.358 "driver_specific": {} 00:10:27.358 } 00:10:27.358 ] 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.358 12:02:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.358 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.358 "name": "Existed_Raid", 00:10:27.358 "uuid": "d92f3b05-6a08-488d-bae9-15aa5708b6de", 00:10:27.358 "strip_size_kb": 64, 00:10:27.359 "state": "online", 00:10:27.359 "raid_level": "raid0", 00:10:27.359 "superblock": false, 00:10:27.359 "num_base_bdevs": 4, 00:10:27.359 "num_base_bdevs_discovered": 4, 00:10:27.359 "num_base_bdevs_operational": 4, 00:10:27.359 "base_bdevs_list": [ 00:10:27.359 { 00:10:27.359 "name": "BaseBdev1", 00:10:27.359 "uuid": "4721b97a-c165-441a-8f01-a2d44299812a", 00:10:27.359 "is_configured": true, 00:10:27.359 "data_offset": 0, 00:10:27.359 "data_size": 65536 00:10:27.359 }, 00:10:27.359 { 00:10:27.359 "name": "BaseBdev2", 00:10:27.359 "uuid": "90de8e36-2d98-4667-aa1b-4c51f7346a8c", 00:10:27.359 "is_configured": true, 00:10:27.359 "data_offset": 0, 00:10:27.359 "data_size": 65536 00:10:27.359 }, 00:10:27.359 { 00:10:27.359 "name": "BaseBdev3", 00:10:27.359 "uuid": 
"67586ac2-1a6d-4a8f-be49-dd37d26b288c", 00:10:27.359 "is_configured": true, 00:10:27.359 "data_offset": 0, 00:10:27.359 "data_size": 65536 00:10:27.359 }, 00:10:27.359 { 00:10:27.359 "name": "BaseBdev4", 00:10:27.359 "uuid": "dff2097d-0d2e-410e-9b8e-ed507a16d772", 00:10:27.359 "is_configured": true, 00:10:27.359 "data_offset": 0, 00:10:27.359 "data_size": 65536 00:10:27.359 } 00:10:27.359 ] 00:10:27.359 }' 00:10:27.359 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.359 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.618 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:27.618 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:27.618 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:27.618 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.618 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.618 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:27.618 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:27.618 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:27.618 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.619 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.619 [2024-11-19 12:02:30.949546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.619 12:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.619 12:02:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.619 "name": "Existed_Raid", 00:10:27.619 "aliases": [ 00:10:27.619 "d92f3b05-6a08-488d-bae9-15aa5708b6de" 00:10:27.619 ], 00:10:27.619 "product_name": "Raid Volume", 00:10:27.619 "block_size": 512, 00:10:27.619 "num_blocks": 262144, 00:10:27.619 "uuid": "d92f3b05-6a08-488d-bae9-15aa5708b6de", 00:10:27.619 "assigned_rate_limits": { 00:10:27.619 "rw_ios_per_sec": 0, 00:10:27.619 "rw_mbytes_per_sec": 0, 00:10:27.619 "r_mbytes_per_sec": 0, 00:10:27.619 "w_mbytes_per_sec": 0 00:10:27.619 }, 00:10:27.619 "claimed": false, 00:10:27.619 "zoned": false, 00:10:27.619 "supported_io_types": { 00:10:27.619 "read": true, 00:10:27.619 "write": true, 00:10:27.619 "unmap": true, 00:10:27.619 "flush": true, 00:10:27.619 "reset": true, 00:10:27.619 "nvme_admin": false, 00:10:27.619 "nvme_io": false, 00:10:27.619 "nvme_io_md": false, 00:10:27.619 "write_zeroes": true, 00:10:27.619 "zcopy": false, 00:10:27.619 "get_zone_info": false, 00:10:27.619 "zone_management": false, 00:10:27.619 "zone_append": false, 00:10:27.619 "compare": false, 00:10:27.619 "compare_and_write": false, 00:10:27.619 "abort": false, 00:10:27.619 "seek_hole": false, 00:10:27.619 "seek_data": false, 00:10:27.619 "copy": false, 00:10:27.619 "nvme_iov_md": false 00:10:27.619 }, 00:10:27.619 "memory_domains": [ 00:10:27.619 { 00:10:27.619 "dma_device_id": "system", 00:10:27.619 "dma_device_type": 1 00:10:27.619 }, 00:10:27.619 { 00:10:27.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.619 "dma_device_type": 2 00:10:27.619 }, 00:10:27.619 { 00:10:27.619 "dma_device_id": "system", 00:10:27.619 "dma_device_type": 1 00:10:27.619 }, 00:10:27.619 { 00:10:27.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.619 "dma_device_type": 2 00:10:27.619 }, 00:10:27.619 { 00:10:27.619 "dma_device_id": "system", 00:10:27.619 "dma_device_type": 1 00:10:27.619 }, 00:10:27.619 { 00:10:27.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:27.619 "dma_device_type": 2 00:10:27.619 }, 00:10:27.619 { 00:10:27.619 "dma_device_id": "system", 00:10:27.619 "dma_device_type": 1 00:10:27.619 }, 00:10:27.619 { 00:10:27.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.619 "dma_device_type": 2 00:10:27.619 } 00:10:27.619 ], 00:10:27.619 "driver_specific": { 00:10:27.619 "raid": { 00:10:27.619 "uuid": "d92f3b05-6a08-488d-bae9-15aa5708b6de", 00:10:27.619 "strip_size_kb": 64, 00:10:27.619 "state": "online", 00:10:27.619 "raid_level": "raid0", 00:10:27.619 "superblock": false, 00:10:27.619 "num_base_bdevs": 4, 00:10:27.619 "num_base_bdevs_discovered": 4, 00:10:27.619 "num_base_bdevs_operational": 4, 00:10:27.619 "base_bdevs_list": [ 00:10:27.619 { 00:10:27.619 "name": "BaseBdev1", 00:10:27.619 "uuid": "4721b97a-c165-441a-8f01-a2d44299812a", 00:10:27.619 "is_configured": true, 00:10:27.619 "data_offset": 0, 00:10:27.619 "data_size": 65536 00:10:27.619 }, 00:10:27.619 { 00:10:27.619 "name": "BaseBdev2", 00:10:27.619 "uuid": "90de8e36-2d98-4667-aa1b-4c51f7346a8c", 00:10:27.619 "is_configured": true, 00:10:27.619 "data_offset": 0, 00:10:27.619 "data_size": 65536 00:10:27.619 }, 00:10:27.619 { 00:10:27.619 "name": "BaseBdev3", 00:10:27.619 "uuid": "67586ac2-1a6d-4a8f-be49-dd37d26b288c", 00:10:27.619 "is_configured": true, 00:10:27.619 "data_offset": 0, 00:10:27.619 "data_size": 65536 00:10:27.619 }, 00:10:27.619 { 00:10:27.619 "name": "BaseBdev4", 00:10:27.619 "uuid": "dff2097d-0d2e-410e-9b8e-ed507a16d772", 00:10:27.619 "is_configured": true, 00:10:27.619 "data_offset": 0, 00:10:27.619 "data_size": 65536 00:10:27.619 } 00:10:27.619 ] 00:10:27.619 } 00:10:27.619 } 00:10:27.619 }' 00:10:27.619 12:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:27.878 BaseBdev2 00:10:27.878 BaseBdev3 
00:10:27.878 BaseBdev4' 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.878 12:02:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.878 12:02:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.878 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.878 [2024-11-19 12:02:31.208791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.878 [2024-11-19 12:02:31.208828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.878 [2024-11-19 12:02:31.208876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.138 "name": "Existed_Raid", 00:10:28.138 "uuid": "d92f3b05-6a08-488d-bae9-15aa5708b6de", 00:10:28.138 "strip_size_kb": 64, 00:10:28.138 "state": "offline", 00:10:28.138 "raid_level": "raid0", 00:10:28.138 "superblock": false, 00:10:28.138 "num_base_bdevs": 4, 00:10:28.138 "num_base_bdevs_discovered": 3, 00:10:28.138 "num_base_bdevs_operational": 3, 00:10:28.138 "base_bdevs_list": [ 00:10:28.138 { 00:10:28.138 "name": null, 00:10:28.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.138 "is_configured": false, 00:10:28.138 "data_offset": 0, 00:10:28.138 "data_size": 65536 00:10:28.138 }, 00:10:28.138 { 00:10:28.138 "name": "BaseBdev2", 00:10:28.138 "uuid": "90de8e36-2d98-4667-aa1b-4c51f7346a8c", 00:10:28.138 "is_configured": 
true, 00:10:28.138 "data_offset": 0, 00:10:28.138 "data_size": 65536 00:10:28.138 }, 00:10:28.138 { 00:10:28.138 "name": "BaseBdev3", 00:10:28.138 "uuid": "67586ac2-1a6d-4a8f-be49-dd37d26b288c", 00:10:28.138 "is_configured": true, 00:10:28.138 "data_offset": 0, 00:10:28.138 "data_size": 65536 00:10:28.138 }, 00:10:28.138 { 00:10:28.138 "name": "BaseBdev4", 00:10:28.138 "uuid": "dff2097d-0d2e-410e-9b8e-ed507a16d772", 00:10:28.138 "is_configured": true, 00:10:28.138 "data_offset": 0, 00:10:28.138 "data_size": 65536 00:10:28.138 } 00:10:28.138 ] 00:10:28.138 }' 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.138 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.397 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:28.397 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.397 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.397 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.397 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.397 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.397 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.397 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.397 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.397 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:28.397 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:28.397 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.671 [2024-11-19 12:02:31.776467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:28.671 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.671 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.671 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.671 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.671 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.671 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.671 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.671 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.671 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.671 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.671 12:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:28.671 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.671 12:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.671 [2024-11-19 12:02:31.931434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:28.671 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.671 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.671 12:02:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.671 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.671 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.671 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.671 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.946 [2024-11-19 12:02:32.075261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:28.946 [2024-11-19 12:02:32.075340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.946 BaseBdev2 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.946 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.946 [ 00:10:28.946 { 00:10:28.946 "name": "BaseBdev2", 00:10:28.946 "aliases": [ 00:10:28.946 "e8d1bb62-0429-4bfb-90b3-c6a95077dad4" 00:10:28.946 ], 00:10:28.946 "product_name": "Malloc disk", 00:10:28.946 "block_size": 512, 00:10:28.946 "num_blocks": 65536, 00:10:28.947 "uuid": "e8d1bb62-0429-4bfb-90b3-c6a95077dad4", 00:10:28.947 "assigned_rate_limits": { 00:10:28.947 "rw_ios_per_sec": 0, 00:10:28.947 "rw_mbytes_per_sec": 0, 00:10:28.947 "r_mbytes_per_sec": 0, 00:10:28.947 "w_mbytes_per_sec": 0 00:10:28.947 }, 00:10:28.947 "claimed": false, 00:10:28.947 "zoned": false, 00:10:28.947 "supported_io_types": { 00:10:28.947 "read": true, 00:10:28.947 "write": true, 00:10:28.947 "unmap": true, 00:10:28.947 "flush": true, 00:10:28.947 "reset": true, 00:10:28.947 "nvme_admin": false, 00:10:28.947 "nvme_io": false, 00:10:28.947 "nvme_io_md": false, 00:10:28.947 "write_zeroes": true, 00:10:28.947 "zcopy": true, 00:10:28.947 "get_zone_info": false, 00:10:28.947 "zone_management": false, 00:10:28.947 "zone_append": false, 00:10:28.947 "compare": false, 00:10:28.947 "compare_and_write": false, 00:10:28.947 "abort": true, 00:10:28.947 "seek_hole": false, 00:10:28.947 "seek_data": false, 
00:10:28.947 "copy": true, 00:10:28.947 "nvme_iov_md": false 00:10:28.947 }, 00:10:28.947 "memory_domains": [ 00:10:28.947 { 00:10:28.947 "dma_device_id": "system", 00:10:28.947 "dma_device_type": 1 00:10:28.947 }, 00:10:28.947 { 00:10:28.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.947 "dma_device_type": 2 00:10:28.947 } 00:10:28.947 ], 00:10:28.947 "driver_specific": {} 00:10:28.947 } 00:10:28.947 ] 00:10:28.947 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.947 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:28.947 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:28.947 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:28.947 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:28.947 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.947 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.207 BaseBdev3 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.207 
12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.207 [ 00:10:29.207 { 00:10:29.207 "name": "BaseBdev3", 00:10:29.207 "aliases": [ 00:10:29.207 "eb27057f-a52a-46de-8ae7-443bb90c133d" 00:10:29.207 ], 00:10:29.207 "product_name": "Malloc disk", 00:10:29.207 "block_size": 512, 00:10:29.207 "num_blocks": 65536, 00:10:29.207 "uuid": "eb27057f-a52a-46de-8ae7-443bb90c133d", 00:10:29.207 "assigned_rate_limits": { 00:10:29.207 "rw_ios_per_sec": 0, 00:10:29.207 "rw_mbytes_per_sec": 0, 00:10:29.207 "r_mbytes_per_sec": 0, 00:10:29.207 "w_mbytes_per_sec": 0 00:10:29.207 }, 00:10:29.207 "claimed": false, 00:10:29.207 "zoned": false, 00:10:29.207 "supported_io_types": { 00:10:29.207 "read": true, 00:10:29.207 "write": true, 00:10:29.207 "unmap": true, 00:10:29.207 "flush": true, 00:10:29.207 "reset": true, 00:10:29.207 "nvme_admin": false, 00:10:29.207 "nvme_io": false, 00:10:29.207 "nvme_io_md": false, 00:10:29.207 "write_zeroes": true, 00:10:29.207 "zcopy": true, 00:10:29.207 "get_zone_info": false, 00:10:29.207 "zone_management": false, 00:10:29.207 "zone_append": false, 00:10:29.207 "compare": false, 00:10:29.207 "compare_and_write": false, 00:10:29.207 "abort": true, 00:10:29.207 "seek_hole": false, 00:10:29.207 "seek_data": false, 00:10:29.207 
"copy": true, 00:10:29.207 "nvme_iov_md": false 00:10:29.207 }, 00:10:29.207 "memory_domains": [ 00:10:29.207 { 00:10:29.207 "dma_device_id": "system", 00:10:29.207 "dma_device_type": 1 00:10:29.207 }, 00:10:29.207 { 00:10:29.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.207 "dma_device_type": 2 00:10:29.207 } 00:10:29.207 ], 00:10:29.207 "driver_specific": {} 00:10:29.207 } 00:10:29.207 ] 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.207 BaseBdev4 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.207 12:02:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.207 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.208 [ 00:10:29.208 { 00:10:29.208 "name": "BaseBdev4", 00:10:29.208 "aliases": [ 00:10:29.208 "70e4eaad-4ffe-4690-a167-ed660087b524" 00:10:29.208 ], 00:10:29.208 "product_name": "Malloc disk", 00:10:29.208 "block_size": 512, 00:10:29.208 "num_blocks": 65536, 00:10:29.208 "uuid": "70e4eaad-4ffe-4690-a167-ed660087b524", 00:10:29.208 "assigned_rate_limits": { 00:10:29.208 "rw_ios_per_sec": 0, 00:10:29.208 "rw_mbytes_per_sec": 0, 00:10:29.208 "r_mbytes_per_sec": 0, 00:10:29.208 "w_mbytes_per_sec": 0 00:10:29.208 }, 00:10:29.208 "claimed": false, 00:10:29.208 "zoned": false, 00:10:29.208 "supported_io_types": { 00:10:29.208 "read": true, 00:10:29.208 "write": true, 00:10:29.208 "unmap": true, 00:10:29.208 "flush": true, 00:10:29.208 "reset": true, 00:10:29.208 "nvme_admin": false, 00:10:29.208 "nvme_io": false, 00:10:29.208 "nvme_io_md": false, 00:10:29.208 "write_zeroes": true, 00:10:29.208 "zcopy": true, 00:10:29.208 "get_zone_info": false, 00:10:29.208 "zone_management": false, 00:10:29.208 "zone_append": false, 00:10:29.208 "compare": false, 00:10:29.208 "compare_and_write": false, 00:10:29.208 "abort": true, 00:10:29.208 "seek_hole": false, 00:10:29.208 "seek_data": false, 00:10:29.208 "copy": true, 
00:10:29.208 "nvme_iov_md": false 00:10:29.208 }, 00:10:29.208 "memory_domains": [ 00:10:29.208 { 00:10:29.208 "dma_device_id": "system", 00:10:29.208 "dma_device_type": 1 00:10:29.208 }, 00:10:29.208 { 00:10:29.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.208 "dma_device_type": 2 00:10:29.208 } 00:10:29.208 ], 00:10:29.208 "driver_specific": {} 00:10:29.208 } 00:10:29.208 ] 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.208 [2024-11-19 12:02:32.435588] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.208 [2024-11-19 12:02:32.435641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.208 [2024-11-19 12:02:32.435663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.208 [2024-11-19 12:02:32.437487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.208 [2024-11-19 12:02:32.437557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.208 12:02:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.208 "name": "Existed_Raid", 00:10:29.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.208 "strip_size_kb": 64, 00:10:29.208 "state": "configuring", 00:10:29.208 
"raid_level": "raid0", 00:10:29.208 "superblock": false, 00:10:29.208 "num_base_bdevs": 4, 00:10:29.208 "num_base_bdevs_discovered": 3, 00:10:29.208 "num_base_bdevs_operational": 4, 00:10:29.208 "base_bdevs_list": [ 00:10:29.208 { 00:10:29.208 "name": "BaseBdev1", 00:10:29.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.208 "is_configured": false, 00:10:29.208 "data_offset": 0, 00:10:29.208 "data_size": 0 00:10:29.208 }, 00:10:29.208 { 00:10:29.208 "name": "BaseBdev2", 00:10:29.208 "uuid": "e8d1bb62-0429-4bfb-90b3-c6a95077dad4", 00:10:29.208 "is_configured": true, 00:10:29.208 "data_offset": 0, 00:10:29.208 "data_size": 65536 00:10:29.208 }, 00:10:29.208 { 00:10:29.208 "name": "BaseBdev3", 00:10:29.208 "uuid": "eb27057f-a52a-46de-8ae7-443bb90c133d", 00:10:29.208 "is_configured": true, 00:10:29.208 "data_offset": 0, 00:10:29.208 "data_size": 65536 00:10:29.208 }, 00:10:29.208 { 00:10:29.208 "name": "BaseBdev4", 00:10:29.208 "uuid": "70e4eaad-4ffe-4690-a167-ed660087b524", 00:10:29.208 "is_configured": true, 00:10:29.208 "data_offset": 0, 00:10:29.208 "data_size": 65536 00:10:29.208 } 00:10:29.208 ] 00:10:29.208 }' 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.208 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.778 [2024-11-19 12:02:32.859176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.778 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.778 "name": "Existed_Raid", 00:10:29.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.778 "strip_size_kb": 64, 00:10:29.778 "state": "configuring", 00:10:29.778 "raid_level": "raid0", 00:10:29.778 "superblock": false, 00:10:29.778 
"num_base_bdevs": 4, 00:10:29.778 "num_base_bdevs_discovered": 2, 00:10:29.778 "num_base_bdevs_operational": 4, 00:10:29.778 "base_bdevs_list": [ 00:10:29.778 { 00:10:29.778 "name": "BaseBdev1", 00:10:29.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.779 "is_configured": false, 00:10:29.779 "data_offset": 0, 00:10:29.779 "data_size": 0 00:10:29.779 }, 00:10:29.779 { 00:10:29.779 "name": null, 00:10:29.779 "uuid": "e8d1bb62-0429-4bfb-90b3-c6a95077dad4", 00:10:29.779 "is_configured": false, 00:10:29.779 "data_offset": 0, 00:10:29.779 "data_size": 65536 00:10:29.779 }, 00:10:29.779 { 00:10:29.779 "name": "BaseBdev3", 00:10:29.779 "uuid": "eb27057f-a52a-46de-8ae7-443bb90c133d", 00:10:29.779 "is_configured": true, 00:10:29.779 "data_offset": 0, 00:10:29.779 "data_size": 65536 00:10:29.779 }, 00:10:29.779 { 00:10:29.779 "name": "BaseBdev4", 00:10:29.779 "uuid": "70e4eaad-4ffe-4690-a167-ed660087b524", 00:10:29.779 "is_configured": true, 00:10:29.779 "data_offset": 0, 00:10:29.779 "data_size": 65536 00:10:29.779 } 00:10:29.779 ] 00:10:29.779 }' 00:10:29.779 12:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.779 12:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:30.039 12:02:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.039 [2024-11-19 12:02:33.346428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.039 BaseBdev1 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.039 [ 00:10:30.039 { 00:10:30.039 "name": "BaseBdev1", 00:10:30.039 "aliases": [ 00:10:30.039 "f3c6b917-82b9-4e0a-82de-00a2df349eb8" 00:10:30.039 ], 00:10:30.039 "product_name": "Malloc disk", 00:10:30.039 "block_size": 512, 00:10:30.039 "num_blocks": 65536, 00:10:30.039 "uuid": "f3c6b917-82b9-4e0a-82de-00a2df349eb8", 00:10:30.039 "assigned_rate_limits": { 00:10:30.039 "rw_ios_per_sec": 0, 00:10:30.039 "rw_mbytes_per_sec": 0, 00:10:30.039 "r_mbytes_per_sec": 0, 00:10:30.039 "w_mbytes_per_sec": 0 00:10:30.039 }, 00:10:30.039 "claimed": true, 00:10:30.039 "claim_type": "exclusive_write", 00:10:30.039 "zoned": false, 00:10:30.039 "supported_io_types": { 00:10:30.039 "read": true, 00:10:30.039 "write": true, 00:10:30.039 "unmap": true, 00:10:30.039 "flush": true, 00:10:30.039 "reset": true, 00:10:30.039 "nvme_admin": false, 00:10:30.039 "nvme_io": false, 00:10:30.039 "nvme_io_md": false, 00:10:30.039 "write_zeroes": true, 00:10:30.039 "zcopy": true, 00:10:30.039 "get_zone_info": false, 00:10:30.039 "zone_management": false, 00:10:30.039 "zone_append": false, 00:10:30.039 "compare": false, 00:10:30.039 "compare_and_write": false, 00:10:30.039 "abort": true, 00:10:30.039 "seek_hole": false, 00:10:30.039 "seek_data": false, 00:10:30.039 "copy": true, 00:10:30.039 "nvme_iov_md": false 00:10:30.039 }, 00:10:30.039 "memory_domains": [ 00:10:30.039 { 00:10:30.039 "dma_device_id": "system", 00:10:30.039 "dma_device_type": 1 00:10:30.039 }, 00:10:30.039 { 00:10:30.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.039 "dma_device_type": 2 00:10:30.039 } 00:10:30.039 ], 00:10:30.039 "driver_specific": {} 00:10:30.039 } 00:10:30.039 ] 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.039 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.300 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.300 "name": "Existed_Raid", 00:10:30.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.300 "strip_size_kb": 64, 00:10:30.300 "state": "configuring", 00:10:30.300 "raid_level": "raid0", 00:10:30.300 "superblock": false, 
00:10:30.300 "num_base_bdevs": 4, 00:10:30.300 "num_base_bdevs_discovered": 3, 00:10:30.300 "num_base_bdevs_operational": 4, 00:10:30.300 "base_bdevs_list": [ 00:10:30.300 { 00:10:30.300 "name": "BaseBdev1", 00:10:30.300 "uuid": "f3c6b917-82b9-4e0a-82de-00a2df349eb8", 00:10:30.300 "is_configured": true, 00:10:30.300 "data_offset": 0, 00:10:30.300 "data_size": 65536 00:10:30.300 }, 00:10:30.300 { 00:10:30.300 "name": null, 00:10:30.300 "uuid": "e8d1bb62-0429-4bfb-90b3-c6a95077dad4", 00:10:30.300 "is_configured": false, 00:10:30.300 "data_offset": 0, 00:10:30.300 "data_size": 65536 00:10:30.300 }, 00:10:30.300 { 00:10:30.300 "name": "BaseBdev3", 00:10:30.300 "uuid": "eb27057f-a52a-46de-8ae7-443bb90c133d", 00:10:30.300 "is_configured": true, 00:10:30.300 "data_offset": 0, 00:10:30.300 "data_size": 65536 00:10:30.300 }, 00:10:30.300 { 00:10:30.300 "name": "BaseBdev4", 00:10:30.300 "uuid": "70e4eaad-4ffe-4690-a167-ed660087b524", 00:10:30.300 "is_configured": true, 00:10:30.300 "data_offset": 0, 00:10:30.300 "data_size": 65536 00:10:30.300 } 00:10:30.300 ] 00:10:30.300 }' 00:10:30.300 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.300 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:30.560 12:02:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.560 [2024-11-19 12:02:33.877611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.560 12:02:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.560 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.561 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.561 "name": "Existed_Raid", 00:10:30.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.561 "strip_size_kb": 64, 00:10:30.561 "state": "configuring", 00:10:30.561 "raid_level": "raid0", 00:10:30.561 "superblock": false, 00:10:30.561 "num_base_bdevs": 4, 00:10:30.561 "num_base_bdevs_discovered": 2, 00:10:30.561 "num_base_bdevs_operational": 4, 00:10:30.561 "base_bdevs_list": [ 00:10:30.561 { 00:10:30.561 "name": "BaseBdev1", 00:10:30.561 "uuid": "f3c6b917-82b9-4e0a-82de-00a2df349eb8", 00:10:30.561 "is_configured": true, 00:10:30.561 "data_offset": 0, 00:10:30.561 "data_size": 65536 00:10:30.561 }, 00:10:30.561 { 00:10:30.561 "name": null, 00:10:30.561 "uuid": "e8d1bb62-0429-4bfb-90b3-c6a95077dad4", 00:10:30.561 "is_configured": false, 00:10:30.561 "data_offset": 0, 00:10:30.561 "data_size": 65536 00:10:30.561 }, 00:10:30.561 { 00:10:30.561 "name": null, 00:10:30.561 "uuid": "eb27057f-a52a-46de-8ae7-443bb90c133d", 00:10:30.561 "is_configured": false, 00:10:30.561 "data_offset": 0, 00:10:30.561 "data_size": 65536 00:10:30.561 }, 00:10:30.561 { 00:10:30.561 "name": "BaseBdev4", 00:10:30.561 "uuid": "70e4eaad-4ffe-4690-a167-ed660087b524", 00:10:30.561 "is_configured": true, 00:10:30.561 "data_offset": 0, 00:10:30.561 "data_size": 65536 00:10:30.561 } 00:10:30.561 ] 00:10:30.561 }' 00:10:30.561 12:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.561 12:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.131 12:02:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.131 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.131 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.131 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.131 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.131 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:31.131 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:31.131 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.131 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.131 [2024-11-19 12:02:34.352850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.131 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.131 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.131 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.131 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.132 "name": "Existed_Raid", 00:10:31.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.132 "strip_size_kb": 64, 00:10:31.132 "state": "configuring", 00:10:31.132 "raid_level": "raid0", 00:10:31.132 "superblock": false, 00:10:31.132 "num_base_bdevs": 4, 00:10:31.132 "num_base_bdevs_discovered": 3, 00:10:31.132 "num_base_bdevs_operational": 4, 00:10:31.132 "base_bdevs_list": [ 00:10:31.132 { 00:10:31.132 "name": "BaseBdev1", 00:10:31.132 "uuid": "f3c6b917-82b9-4e0a-82de-00a2df349eb8", 00:10:31.132 "is_configured": true, 00:10:31.132 "data_offset": 0, 00:10:31.132 "data_size": 65536 00:10:31.132 }, 00:10:31.132 { 00:10:31.132 "name": null, 00:10:31.132 "uuid": "e8d1bb62-0429-4bfb-90b3-c6a95077dad4", 00:10:31.132 "is_configured": false, 00:10:31.132 "data_offset": 0, 00:10:31.132 "data_size": 65536 00:10:31.132 }, 00:10:31.132 { 00:10:31.132 "name": "BaseBdev3", 00:10:31.132 "uuid": "eb27057f-a52a-46de-8ae7-443bb90c133d", 
00:10:31.132 "is_configured": true, 00:10:31.132 "data_offset": 0, 00:10:31.132 "data_size": 65536 00:10:31.132 }, 00:10:31.132 { 00:10:31.132 "name": "BaseBdev4", 00:10:31.132 "uuid": "70e4eaad-4ffe-4690-a167-ed660087b524", 00:10:31.132 "is_configured": true, 00:10:31.132 "data_offset": 0, 00:10:31.132 "data_size": 65536 00:10:31.132 } 00:10:31.132 ] 00:10:31.132 }' 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.132 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.391 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.391 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.391 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.391 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.391 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.651 [2024-11-19 12:02:34.808091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.651 12:02:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.651 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.652 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.652 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.652 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.652 "name": "Existed_Raid", 00:10:31.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.652 "strip_size_kb": 64, 00:10:31.652 "state": "configuring", 00:10:31.652 "raid_level": "raid0", 00:10:31.652 "superblock": false, 00:10:31.652 "num_base_bdevs": 4, 00:10:31.652 "num_base_bdevs_discovered": 2, 00:10:31.652 
"num_base_bdevs_operational": 4, 00:10:31.652 "base_bdevs_list": [ 00:10:31.652 { 00:10:31.652 "name": null, 00:10:31.652 "uuid": "f3c6b917-82b9-4e0a-82de-00a2df349eb8", 00:10:31.652 "is_configured": false, 00:10:31.652 "data_offset": 0, 00:10:31.652 "data_size": 65536 00:10:31.652 }, 00:10:31.652 { 00:10:31.652 "name": null, 00:10:31.652 "uuid": "e8d1bb62-0429-4bfb-90b3-c6a95077dad4", 00:10:31.652 "is_configured": false, 00:10:31.652 "data_offset": 0, 00:10:31.652 "data_size": 65536 00:10:31.652 }, 00:10:31.652 { 00:10:31.652 "name": "BaseBdev3", 00:10:31.652 "uuid": "eb27057f-a52a-46de-8ae7-443bb90c133d", 00:10:31.652 "is_configured": true, 00:10:31.652 "data_offset": 0, 00:10:31.652 "data_size": 65536 00:10:31.652 }, 00:10:31.652 { 00:10:31.652 "name": "BaseBdev4", 00:10:31.652 "uuid": "70e4eaad-4ffe-4690-a167-ed660087b524", 00:10:31.652 "is_configured": true, 00:10:31.652 "data_offset": 0, 00:10:31.652 "data_size": 65536 00:10:31.652 } 00:10:31.652 ] 00:10:31.652 }' 00:10:31.652 12:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.652 12:02:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.224 [2024-11-19 12:02:35.465389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.224 12:02:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.224 "name": "Existed_Raid", 00:10:32.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.224 "strip_size_kb": 64, 00:10:32.224 "state": "configuring", 00:10:32.224 "raid_level": "raid0", 00:10:32.224 "superblock": false, 00:10:32.224 "num_base_bdevs": 4, 00:10:32.224 "num_base_bdevs_discovered": 3, 00:10:32.224 "num_base_bdevs_operational": 4, 00:10:32.224 "base_bdevs_list": [ 00:10:32.224 { 00:10:32.224 "name": null, 00:10:32.224 "uuid": "f3c6b917-82b9-4e0a-82de-00a2df349eb8", 00:10:32.224 "is_configured": false, 00:10:32.224 "data_offset": 0, 00:10:32.224 "data_size": 65536 00:10:32.224 }, 00:10:32.224 { 00:10:32.224 "name": "BaseBdev2", 00:10:32.224 "uuid": "e8d1bb62-0429-4bfb-90b3-c6a95077dad4", 00:10:32.224 "is_configured": true, 00:10:32.224 "data_offset": 0, 00:10:32.224 "data_size": 65536 00:10:32.224 }, 00:10:32.224 { 00:10:32.224 "name": "BaseBdev3", 00:10:32.224 "uuid": "eb27057f-a52a-46de-8ae7-443bb90c133d", 00:10:32.224 "is_configured": true, 00:10:32.224 "data_offset": 0, 00:10:32.224 "data_size": 65536 00:10:32.224 }, 00:10:32.224 { 00:10:32.224 "name": "BaseBdev4", 00:10:32.224 "uuid": "70e4eaad-4ffe-4690-a167-ed660087b524", 00:10:32.224 "is_configured": true, 00:10:32.224 "data_offset": 0, 00:10:32.224 "data_size": 65536 00:10:32.224 } 00:10:32.224 ] 00:10:32.224 }' 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.224 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.795 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:32.795 
12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.795 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.795 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.795 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.795 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:32.795 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:32.795 12:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.795 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.795 12:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f3c6b917-82b9-4e0a-82de-00a2df349eb8 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.795 [2024-11-19 12:02:36.068171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:32.795 [2024-11-19 12:02:36.068227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:32.795 [2024-11-19 12:02:36.068235] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:32.795 [2024-11-19 12:02:36.068494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:32.795 [2024-11-19 12:02:36.068628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:32.795 [2024-11-19 12:02:36.068649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:32.795 [2024-11-19 12:02:36.068879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.795 NewBaseBdev 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.795 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:32.795 [ 00:10:32.795 { 00:10:32.795 "name": "NewBaseBdev", 00:10:32.795 "aliases": [ 00:10:32.795 "f3c6b917-82b9-4e0a-82de-00a2df349eb8" 00:10:32.795 ], 00:10:32.796 "product_name": "Malloc disk", 00:10:32.796 "block_size": 512, 00:10:32.796 "num_blocks": 65536, 00:10:32.796 "uuid": "f3c6b917-82b9-4e0a-82de-00a2df349eb8", 00:10:32.796 "assigned_rate_limits": { 00:10:32.796 "rw_ios_per_sec": 0, 00:10:32.796 "rw_mbytes_per_sec": 0, 00:10:32.796 "r_mbytes_per_sec": 0, 00:10:32.796 "w_mbytes_per_sec": 0 00:10:32.796 }, 00:10:32.796 "claimed": true, 00:10:32.796 "claim_type": "exclusive_write", 00:10:32.796 "zoned": false, 00:10:32.796 "supported_io_types": { 00:10:32.796 "read": true, 00:10:32.796 "write": true, 00:10:32.796 "unmap": true, 00:10:32.796 "flush": true, 00:10:32.796 "reset": true, 00:10:32.796 "nvme_admin": false, 00:10:32.796 "nvme_io": false, 00:10:32.796 "nvme_io_md": false, 00:10:32.796 "write_zeroes": true, 00:10:32.796 "zcopy": true, 00:10:32.796 "get_zone_info": false, 00:10:32.796 "zone_management": false, 00:10:32.796 "zone_append": false, 00:10:32.796 "compare": false, 00:10:32.796 "compare_and_write": false, 00:10:32.796 "abort": true, 00:10:32.796 "seek_hole": false, 00:10:32.796 "seek_data": false, 00:10:32.796 "copy": true, 00:10:32.796 "nvme_iov_md": false 00:10:32.796 }, 00:10:32.796 "memory_domains": [ 00:10:32.796 { 00:10:32.796 "dma_device_id": "system", 00:10:32.796 "dma_device_type": 1 00:10:32.796 }, 00:10:32.796 { 00:10:32.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.796 "dma_device_type": 2 00:10:32.796 } 00:10:32.796 ], 00:10:32.796 "driver_specific": {} 00:10:32.796 } 00:10:32.796 ] 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.796 "name": "Existed_Raid", 00:10:32.796 "uuid": "dd4cda2e-35d2-44ef-8dae-49030e55341e", 00:10:32.796 "strip_size_kb": 64, 00:10:32.796 "state": "online", 00:10:32.796 "raid_level": "raid0", 00:10:32.796 "superblock": false, 00:10:32.796 "num_base_bdevs": 4, 00:10:32.796 
"num_base_bdevs_discovered": 4, 00:10:32.796 "num_base_bdevs_operational": 4, 00:10:32.796 "base_bdevs_list": [ 00:10:32.796 { 00:10:32.796 "name": "NewBaseBdev", 00:10:32.796 "uuid": "f3c6b917-82b9-4e0a-82de-00a2df349eb8", 00:10:32.796 "is_configured": true, 00:10:32.796 "data_offset": 0, 00:10:32.796 "data_size": 65536 00:10:32.796 }, 00:10:32.796 { 00:10:32.796 "name": "BaseBdev2", 00:10:32.796 "uuid": "e8d1bb62-0429-4bfb-90b3-c6a95077dad4", 00:10:32.796 "is_configured": true, 00:10:32.796 "data_offset": 0, 00:10:32.796 "data_size": 65536 00:10:32.796 }, 00:10:32.796 { 00:10:32.796 "name": "BaseBdev3", 00:10:32.796 "uuid": "eb27057f-a52a-46de-8ae7-443bb90c133d", 00:10:32.796 "is_configured": true, 00:10:32.796 "data_offset": 0, 00:10:32.796 "data_size": 65536 00:10:32.796 }, 00:10:32.796 { 00:10:32.796 "name": "BaseBdev4", 00:10:32.796 "uuid": "70e4eaad-4ffe-4690-a167-ed660087b524", 00:10:32.796 "is_configured": true, 00:10:32.796 "data_offset": 0, 00:10:32.796 "data_size": 65536 00:10:32.796 } 00:10:32.796 ] 00:10:32.796 }' 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.796 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.366 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:33.366 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:33.366 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:33.366 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:33.366 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:33.366 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:33.366 12:02:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:33.366 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.366 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.366 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:33.366 [2024-11-19 12:02:36.531847] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.366 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.366 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:33.366 "name": "Existed_Raid", 00:10:33.366 "aliases": [ 00:10:33.366 "dd4cda2e-35d2-44ef-8dae-49030e55341e" 00:10:33.366 ], 00:10:33.366 "product_name": "Raid Volume", 00:10:33.366 "block_size": 512, 00:10:33.366 "num_blocks": 262144, 00:10:33.366 "uuid": "dd4cda2e-35d2-44ef-8dae-49030e55341e", 00:10:33.366 "assigned_rate_limits": { 00:10:33.366 "rw_ios_per_sec": 0, 00:10:33.366 "rw_mbytes_per_sec": 0, 00:10:33.366 "r_mbytes_per_sec": 0, 00:10:33.366 "w_mbytes_per_sec": 0 00:10:33.366 }, 00:10:33.366 "claimed": false, 00:10:33.366 "zoned": false, 00:10:33.366 "supported_io_types": { 00:10:33.366 "read": true, 00:10:33.367 "write": true, 00:10:33.367 "unmap": true, 00:10:33.367 "flush": true, 00:10:33.367 "reset": true, 00:10:33.367 "nvme_admin": false, 00:10:33.367 "nvme_io": false, 00:10:33.367 "nvme_io_md": false, 00:10:33.367 "write_zeroes": true, 00:10:33.367 "zcopy": false, 00:10:33.367 "get_zone_info": false, 00:10:33.367 "zone_management": false, 00:10:33.367 "zone_append": false, 00:10:33.367 "compare": false, 00:10:33.367 "compare_and_write": false, 00:10:33.367 "abort": false, 00:10:33.367 "seek_hole": false, 00:10:33.367 "seek_data": false, 00:10:33.367 "copy": false, 00:10:33.367 "nvme_iov_md": false 00:10:33.367 }, 00:10:33.367 "memory_domains": [ 
00:10:33.367 { 00:10:33.367 "dma_device_id": "system", 00:10:33.367 "dma_device_type": 1 00:10:33.367 }, 00:10:33.367 { 00:10:33.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.367 "dma_device_type": 2 00:10:33.367 }, 00:10:33.367 { 00:10:33.367 "dma_device_id": "system", 00:10:33.367 "dma_device_type": 1 00:10:33.367 }, 00:10:33.367 { 00:10:33.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.367 "dma_device_type": 2 00:10:33.367 }, 00:10:33.367 { 00:10:33.367 "dma_device_id": "system", 00:10:33.367 "dma_device_type": 1 00:10:33.367 }, 00:10:33.367 { 00:10:33.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.367 "dma_device_type": 2 00:10:33.367 }, 00:10:33.367 { 00:10:33.367 "dma_device_id": "system", 00:10:33.367 "dma_device_type": 1 00:10:33.367 }, 00:10:33.367 { 00:10:33.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.367 "dma_device_type": 2 00:10:33.367 } 00:10:33.367 ], 00:10:33.367 "driver_specific": { 00:10:33.367 "raid": { 00:10:33.367 "uuid": "dd4cda2e-35d2-44ef-8dae-49030e55341e", 00:10:33.367 "strip_size_kb": 64, 00:10:33.367 "state": "online", 00:10:33.367 "raid_level": "raid0", 00:10:33.367 "superblock": false, 00:10:33.367 "num_base_bdevs": 4, 00:10:33.367 "num_base_bdevs_discovered": 4, 00:10:33.367 "num_base_bdevs_operational": 4, 00:10:33.367 "base_bdevs_list": [ 00:10:33.367 { 00:10:33.367 "name": "NewBaseBdev", 00:10:33.367 "uuid": "f3c6b917-82b9-4e0a-82de-00a2df349eb8", 00:10:33.367 "is_configured": true, 00:10:33.367 "data_offset": 0, 00:10:33.367 "data_size": 65536 00:10:33.367 }, 00:10:33.367 { 00:10:33.367 "name": "BaseBdev2", 00:10:33.367 "uuid": "e8d1bb62-0429-4bfb-90b3-c6a95077dad4", 00:10:33.367 "is_configured": true, 00:10:33.367 "data_offset": 0, 00:10:33.367 "data_size": 65536 00:10:33.367 }, 00:10:33.367 { 00:10:33.367 "name": "BaseBdev3", 00:10:33.367 "uuid": "eb27057f-a52a-46de-8ae7-443bb90c133d", 00:10:33.367 "is_configured": true, 00:10:33.367 "data_offset": 0, 00:10:33.367 "data_size": 65536 
00:10:33.367 }, 00:10:33.367 { 00:10:33.367 "name": "BaseBdev4", 00:10:33.367 "uuid": "70e4eaad-4ffe-4690-a167-ed660087b524", 00:10:33.367 "is_configured": true, 00:10:33.367 "data_offset": 0, 00:10:33.367 "data_size": 65536 00:10:33.367 } 00:10:33.367 ] 00:10:33.367 } 00:10:33.367 } 00:10:33.367 }' 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:33.367 BaseBdev2 00:10:33.367 BaseBdev3 00:10:33.367 BaseBdev4' 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.367 
12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.367 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.627 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.627 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.627 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.627 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.627 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:10:33.627 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.627 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.627 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.627 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.627 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.627 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.627 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.627 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.627 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.627 [2024-11-19 12:02:36.831018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.627 [2024-11-19 12:02:36.831093] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.627 [2024-11-19 12:02:36.831180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.628 [2024-11-19 12:02:36.831244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.628 [2024-11-19 12:02:36.831254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:33.628 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.628 12:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69401 00:10:33.628 12:02:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69401 ']' 00:10:33.628 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69401 00:10:33.628 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:33.628 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.628 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69401 00:10:33.628 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.628 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.628 killing process with pid 69401 00:10:33.628 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69401' 00:10:33.628 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69401 00:10:33.628 [2024-11-19 12:02:36.873348] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.628 12:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69401 00:10:34.198 [2024-11-19 12:02:37.268231] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.141 12:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:35.141 00:10:35.141 real 0m11.318s 00:10:35.141 user 0m17.950s 00:10:35.141 sys 0m2.013s 00:10:35.141 12:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.141 12:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.141 ************************************ 00:10:35.141 END TEST raid_state_function_test 00:10:35.141 ************************************ 00:10:35.141 12:02:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:35.141 12:02:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:35.141 12:02:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.141 12:02:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.141 ************************************ 00:10:35.141 START TEST raid_state_function_test_sb 00:10:35.141 ************************************ 00:10:35.141 12:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:35.141 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:35.141 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:35.141 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:35.141 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:35.141 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:35.142 
12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70072 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70072' 00:10:35.142 Process raid pid: 70072 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70072 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70072 ']' 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.142 12:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.401 [2024-11-19 12:02:38.527215] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:35.401 [2024-11-19 12:02:38.527333] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.401 [2024-11-19 12:02:38.699624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.661 [2024-11-19 12:02:38.819125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.661 [2024-11-19 12:02:39.016365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.661 [2024-11-19 12:02:39.016413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.244 [2024-11-19 12:02:39.364222] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.244 [2024-11-19 12:02:39.364284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.244 [2024-11-19 12:02:39.364295] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.244 [2024-11-19 12:02:39.364304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.244 [2024-11-19 12:02:39.364310] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:36.244 [2024-11-19 12:02:39.364319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.244 [2024-11-19 12:02:39.364324] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:36.244 [2024-11-19 12:02:39.364332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.244 12:02:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.244 "name": "Existed_Raid", 00:10:36.244 "uuid": "23ac931a-2e32-4e85-b56e-b3af219527e4", 00:10:36.244 "strip_size_kb": 64, 00:10:36.244 "state": "configuring", 00:10:36.244 "raid_level": "raid0", 00:10:36.244 "superblock": true, 00:10:36.244 "num_base_bdevs": 4, 00:10:36.244 "num_base_bdevs_discovered": 0, 00:10:36.244 "num_base_bdevs_operational": 4, 00:10:36.244 "base_bdevs_list": [ 00:10:36.244 { 00:10:36.244 "name": "BaseBdev1", 00:10:36.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.244 "is_configured": false, 00:10:36.244 "data_offset": 0, 00:10:36.244 "data_size": 0 00:10:36.244 }, 00:10:36.244 { 00:10:36.244 "name": "BaseBdev2", 00:10:36.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.244 "is_configured": false, 00:10:36.244 "data_offset": 0, 00:10:36.244 "data_size": 0 00:10:36.244 }, 00:10:36.244 { 00:10:36.244 "name": "BaseBdev3", 00:10:36.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.244 "is_configured": false, 00:10:36.244 "data_offset": 0, 00:10:36.244 "data_size": 0 00:10:36.244 }, 00:10:36.244 { 00:10:36.244 "name": "BaseBdev4", 00:10:36.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.244 "is_configured": false, 00:10:36.244 "data_offset": 0, 00:10:36.244 "data_size": 0 00:10:36.244 } 00:10:36.244 ] 00:10:36.244 }' 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.244 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.503 [2024-11-19 12:02:39.787432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.503 [2024-11-19 12:02:39.787483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.503 [2024-11-19 12:02:39.795396] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.503 [2024-11-19 12:02:39.795440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.503 [2024-11-19 12:02:39.795449] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.503 [2024-11-19 12:02:39.795458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.503 [2024-11-19 12:02:39.795464] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.503 [2024-11-19 12:02:39.795473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.503 [2024-11-19 12:02:39.795484] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:36.503 [2024-11-19 12:02:39.795493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.503 [2024-11-19 12:02:39.840919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.503 BaseBdev1 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.503 [ 00:10:36.503 { 00:10:36.503 "name": "BaseBdev1", 00:10:36.503 "aliases": [ 00:10:36.503 "06a35ddd-ad8f-4cde-8c20-fbaeeaaaa1d7" 00:10:36.503 ], 00:10:36.503 "product_name": "Malloc disk", 00:10:36.503 "block_size": 512, 00:10:36.503 "num_blocks": 65536, 00:10:36.503 "uuid": "06a35ddd-ad8f-4cde-8c20-fbaeeaaaa1d7", 00:10:36.503 "assigned_rate_limits": { 00:10:36.503 "rw_ios_per_sec": 0, 00:10:36.503 "rw_mbytes_per_sec": 0, 00:10:36.503 "r_mbytes_per_sec": 0, 00:10:36.503 "w_mbytes_per_sec": 0 00:10:36.503 }, 00:10:36.503 "claimed": true, 00:10:36.503 "claim_type": "exclusive_write", 00:10:36.503 "zoned": false, 00:10:36.503 "supported_io_types": { 00:10:36.503 "read": true, 00:10:36.503 "write": true, 00:10:36.503 "unmap": true, 00:10:36.503 "flush": true, 00:10:36.503 "reset": true, 00:10:36.503 "nvme_admin": false, 00:10:36.503 "nvme_io": false, 00:10:36.503 "nvme_io_md": false, 00:10:36.503 "write_zeroes": true, 00:10:36.503 "zcopy": true, 00:10:36.503 "get_zone_info": false, 00:10:36.503 "zone_management": false, 00:10:36.503 "zone_append": false, 00:10:36.503 "compare": false, 00:10:36.503 "compare_and_write": false, 00:10:36.503 "abort": true, 00:10:36.503 "seek_hole": false, 00:10:36.503 "seek_data": false, 00:10:36.503 "copy": true, 00:10:36.503 "nvme_iov_md": false 00:10:36.503 }, 00:10:36.503 "memory_domains": [ 00:10:36.503 { 00:10:36.503 "dma_device_id": "system", 00:10:36.503 "dma_device_type": 1 00:10:36.503 }, 00:10:36.503 { 00:10:36.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.503 "dma_device_type": 2 00:10:36.503 } 00:10:36.503 ], 00:10:36.503 "driver_specific": {} 
00:10:36.503 } 00:10:36.503 ] 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.503 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:36.504 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.504 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.504 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.763 "name": "Existed_Raid", 00:10:36.763 "uuid": "979e567a-032c-4d28-a5c1-57bdcf81a860", 00:10:36.763 "strip_size_kb": 64, 00:10:36.763 "state": "configuring", 00:10:36.763 "raid_level": "raid0", 00:10:36.763 "superblock": true, 00:10:36.763 "num_base_bdevs": 4, 00:10:36.763 "num_base_bdevs_discovered": 1, 00:10:36.763 "num_base_bdevs_operational": 4, 00:10:36.763 "base_bdevs_list": [ 00:10:36.763 { 00:10:36.763 "name": "BaseBdev1", 00:10:36.763 "uuid": "06a35ddd-ad8f-4cde-8c20-fbaeeaaaa1d7", 00:10:36.763 "is_configured": true, 00:10:36.763 "data_offset": 2048, 00:10:36.763 "data_size": 63488 00:10:36.763 }, 00:10:36.763 { 00:10:36.763 "name": "BaseBdev2", 00:10:36.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.763 "is_configured": false, 00:10:36.763 "data_offset": 0, 00:10:36.763 "data_size": 0 00:10:36.763 }, 00:10:36.763 { 00:10:36.763 "name": "BaseBdev3", 00:10:36.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.763 "is_configured": false, 00:10:36.763 "data_offset": 0, 00:10:36.763 "data_size": 0 00:10:36.763 }, 00:10:36.763 { 00:10:36.763 "name": "BaseBdev4", 00:10:36.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.763 "is_configured": false, 00:10:36.763 "data_offset": 0, 00:10:36.763 "data_size": 0 00:10:36.763 } 00:10:36.763 ] 00:10:36.763 }' 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.763 12:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.022 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:37.022 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.022 12:02:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.022 [2024-11-19 12:02:40.320169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.022 [2024-11-19 12:02:40.320241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:37.022 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.023 [2024-11-19 12:02:40.332181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.023 [2024-11-19 12:02:40.333873] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.023 [2024-11-19 12:02:40.333932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.023 [2024-11-19 12:02:40.333943] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:37.023 [2024-11-19 12:02:40.333953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.023 [2024-11-19 12:02:40.333959] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:37.023 [2024-11-19 12:02:40.333967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:37.023 12:02:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.023 "name": 
"Existed_Raid", 00:10:37.023 "uuid": "5bfcd016-70f0-4505-86f0-863cff39a569", 00:10:37.023 "strip_size_kb": 64, 00:10:37.023 "state": "configuring", 00:10:37.023 "raid_level": "raid0", 00:10:37.023 "superblock": true, 00:10:37.023 "num_base_bdevs": 4, 00:10:37.023 "num_base_bdevs_discovered": 1, 00:10:37.023 "num_base_bdevs_operational": 4, 00:10:37.023 "base_bdevs_list": [ 00:10:37.023 { 00:10:37.023 "name": "BaseBdev1", 00:10:37.023 "uuid": "06a35ddd-ad8f-4cde-8c20-fbaeeaaaa1d7", 00:10:37.023 "is_configured": true, 00:10:37.023 "data_offset": 2048, 00:10:37.023 "data_size": 63488 00:10:37.023 }, 00:10:37.023 { 00:10:37.023 "name": "BaseBdev2", 00:10:37.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.023 "is_configured": false, 00:10:37.023 "data_offset": 0, 00:10:37.023 "data_size": 0 00:10:37.023 }, 00:10:37.023 { 00:10:37.023 "name": "BaseBdev3", 00:10:37.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.023 "is_configured": false, 00:10:37.023 "data_offset": 0, 00:10:37.023 "data_size": 0 00:10:37.023 }, 00:10:37.023 { 00:10:37.023 "name": "BaseBdev4", 00:10:37.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.023 "is_configured": false, 00:10:37.023 "data_offset": 0, 00:10:37.023 "data_size": 0 00:10:37.023 } 00:10:37.023 ] 00:10:37.023 }' 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.023 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.592 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.593 [2024-11-19 12:02:40.762332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:37.593 BaseBdev2 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.593 [ 00:10:37.593 { 00:10:37.593 "name": "BaseBdev2", 00:10:37.593 "aliases": [ 00:10:37.593 "b55999df-cf11-4c26-9e6c-fbe40349b300" 00:10:37.593 ], 00:10:37.593 "product_name": "Malloc disk", 00:10:37.593 "block_size": 512, 00:10:37.593 "num_blocks": 65536, 00:10:37.593 "uuid": "b55999df-cf11-4c26-9e6c-fbe40349b300", 00:10:37.593 
"assigned_rate_limits": { 00:10:37.593 "rw_ios_per_sec": 0, 00:10:37.593 "rw_mbytes_per_sec": 0, 00:10:37.593 "r_mbytes_per_sec": 0, 00:10:37.593 "w_mbytes_per_sec": 0 00:10:37.593 }, 00:10:37.593 "claimed": true, 00:10:37.593 "claim_type": "exclusive_write", 00:10:37.593 "zoned": false, 00:10:37.593 "supported_io_types": { 00:10:37.593 "read": true, 00:10:37.593 "write": true, 00:10:37.593 "unmap": true, 00:10:37.593 "flush": true, 00:10:37.593 "reset": true, 00:10:37.593 "nvme_admin": false, 00:10:37.593 "nvme_io": false, 00:10:37.593 "nvme_io_md": false, 00:10:37.593 "write_zeroes": true, 00:10:37.593 "zcopy": true, 00:10:37.593 "get_zone_info": false, 00:10:37.593 "zone_management": false, 00:10:37.593 "zone_append": false, 00:10:37.593 "compare": false, 00:10:37.593 "compare_and_write": false, 00:10:37.593 "abort": true, 00:10:37.593 "seek_hole": false, 00:10:37.593 "seek_data": false, 00:10:37.593 "copy": true, 00:10:37.593 "nvme_iov_md": false 00:10:37.593 }, 00:10:37.593 "memory_domains": [ 00:10:37.593 { 00:10:37.593 "dma_device_id": "system", 00:10:37.593 "dma_device_type": 1 00:10:37.593 }, 00:10:37.593 { 00:10:37.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.593 "dma_device_type": 2 00:10:37.593 } 00:10:37.593 ], 00:10:37.593 "driver_specific": {} 00:10:37.593 } 00:10:37.593 ] 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.593 "name": "Existed_Raid", 00:10:37.593 "uuid": "5bfcd016-70f0-4505-86f0-863cff39a569", 00:10:37.593 "strip_size_kb": 64, 00:10:37.593 "state": "configuring", 00:10:37.593 "raid_level": "raid0", 00:10:37.593 "superblock": true, 00:10:37.593 "num_base_bdevs": 4, 00:10:37.593 "num_base_bdevs_discovered": 2, 00:10:37.593 "num_base_bdevs_operational": 4, 
00:10:37.593 "base_bdevs_list": [ 00:10:37.593 { 00:10:37.593 "name": "BaseBdev1", 00:10:37.593 "uuid": "06a35ddd-ad8f-4cde-8c20-fbaeeaaaa1d7", 00:10:37.593 "is_configured": true, 00:10:37.593 "data_offset": 2048, 00:10:37.593 "data_size": 63488 00:10:37.593 }, 00:10:37.593 { 00:10:37.593 "name": "BaseBdev2", 00:10:37.593 "uuid": "b55999df-cf11-4c26-9e6c-fbe40349b300", 00:10:37.593 "is_configured": true, 00:10:37.593 "data_offset": 2048, 00:10:37.593 "data_size": 63488 00:10:37.593 }, 00:10:37.593 { 00:10:37.593 "name": "BaseBdev3", 00:10:37.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.593 "is_configured": false, 00:10:37.593 "data_offset": 0, 00:10:37.593 "data_size": 0 00:10:37.593 }, 00:10:37.593 { 00:10:37.593 "name": "BaseBdev4", 00:10:37.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.593 "is_configured": false, 00:10:37.593 "data_offset": 0, 00:10:37.593 "data_size": 0 00:10:37.593 } 00:10:37.593 ] 00:10:37.593 }' 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.593 12:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.852 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:37.852 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.852 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.111 [2024-11-19 12:02:41.274118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.111 BaseBdev3 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.111 [ 00:10:38.111 { 00:10:38.111 "name": "BaseBdev3", 00:10:38.111 "aliases": [ 00:10:38.111 "ce4a5194-93b1-4e4c-a6a9-33915fa57b56" 00:10:38.111 ], 00:10:38.111 "product_name": "Malloc disk", 00:10:38.111 "block_size": 512, 00:10:38.111 "num_blocks": 65536, 00:10:38.111 "uuid": "ce4a5194-93b1-4e4c-a6a9-33915fa57b56", 00:10:38.111 "assigned_rate_limits": { 00:10:38.111 "rw_ios_per_sec": 0, 00:10:38.111 "rw_mbytes_per_sec": 0, 00:10:38.111 "r_mbytes_per_sec": 0, 00:10:38.111 "w_mbytes_per_sec": 0 00:10:38.111 }, 00:10:38.111 "claimed": true, 00:10:38.111 "claim_type": "exclusive_write", 00:10:38.111 "zoned": false, 00:10:38.111 "supported_io_types": { 00:10:38.111 "read": true, 00:10:38.111 
"write": true, 00:10:38.111 "unmap": true, 00:10:38.111 "flush": true, 00:10:38.111 "reset": true, 00:10:38.111 "nvme_admin": false, 00:10:38.111 "nvme_io": false, 00:10:38.111 "nvme_io_md": false, 00:10:38.111 "write_zeroes": true, 00:10:38.111 "zcopy": true, 00:10:38.111 "get_zone_info": false, 00:10:38.111 "zone_management": false, 00:10:38.111 "zone_append": false, 00:10:38.111 "compare": false, 00:10:38.111 "compare_and_write": false, 00:10:38.111 "abort": true, 00:10:38.111 "seek_hole": false, 00:10:38.111 "seek_data": false, 00:10:38.111 "copy": true, 00:10:38.111 "nvme_iov_md": false 00:10:38.111 }, 00:10:38.111 "memory_domains": [ 00:10:38.111 { 00:10:38.111 "dma_device_id": "system", 00:10:38.111 "dma_device_type": 1 00:10:38.111 }, 00:10:38.111 { 00:10:38.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.111 "dma_device_type": 2 00:10:38.111 } 00:10:38.111 ], 00:10:38.111 "driver_specific": {} 00:10:38.111 } 00:10:38.111 ] 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.111 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.112 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.112 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.112 "name": "Existed_Raid", 00:10:38.112 "uuid": "5bfcd016-70f0-4505-86f0-863cff39a569", 00:10:38.112 "strip_size_kb": 64, 00:10:38.112 "state": "configuring", 00:10:38.112 "raid_level": "raid0", 00:10:38.112 "superblock": true, 00:10:38.112 "num_base_bdevs": 4, 00:10:38.112 "num_base_bdevs_discovered": 3, 00:10:38.112 "num_base_bdevs_operational": 4, 00:10:38.112 "base_bdevs_list": [ 00:10:38.112 { 00:10:38.112 "name": "BaseBdev1", 00:10:38.112 "uuid": "06a35ddd-ad8f-4cde-8c20-fbaeeaaaa1d7", 00:10:38.112 "is_configured": true, 00:10:38.112 "data_offset": 2048, 00:10:38.112 "data_size": 63488 00:10:38.112 }, 00:10:38.112 { 00:10:38.112 "name": "BaseBdev2", 00:10:38.112 "uuid": 
"b55999df-cf11-4c26-9e6c-fbe40349b300", 00:10:38.112 "is_configured": true, 00:10:38.112 "data_offset": 2048, 00:10:38.112 "data_size": 63488 00:10:38.112 }, 00:10:38.112 { 00:10:38.112 "name": "BaseBdev3", 00:10:38.112 "uuid": "ce4a5194-93b1-4e4c-a6a9-33915fa57b56", 00:10:38.112 "is_configured": true, 00:10:38.112 "data_offset": 2048, 00:10:38.112 "data_size": 63488 00:10:38.112 }, 00:10:38.112 { 00:10:38.112 "name": "BaseBdev4", 00:10:38.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.112 "is_configured": false, 00:10:38.112 "data_offset": 0, 00:10:38.112 "data_size": 0 00:10:38.112 } 00:10:38.112 ] 00:10:38.112 }' 00:10:38.112 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.112 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.680 [2024-11-19 12:02:41.809399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:38.680 [2024-11-19 12:02:41.809670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:38.680 [2024-11-19 12:02:41.809687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:38.680 [2024-11-19 12:02:41.809935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:38.680 [2024-11-19 12:02:41.810118] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:38.680 [2024-11-19 12:02:41.810137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:10:38.680 [2024-11-19 12:02:41.810280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.680 BaseBdev4 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.680 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.680 [ 00:10:38.680 { 00:10:38.680 "name": "BaseBdev4", 00:10:38.680 "aliases": [ 00:10:38.680 "199703b5-d7f6-4fbd-9fad-26e37f966d39" 00:10:38.680 ], 00:10:38.680 "product_name": "Malloc disk", 00:10:38.680 "block_size": 512, 00:10:38.680 
"num_blocks": 65536, 00:10:38.680 "uuid": "199703b5-d7f6-4fbd-9fad-26e37f966d39", 00:10:38.680 "assigned_rate_limits": { 00:10:38.680 "rw_ios_per_sec": 0, 00:10:38.680 "rw_mbytes_per_sec": 0, 00:10:38.680 "r_mbytes_per_sec": 0, 00:10:38.680 "w_mbytes_per_sec": 0 00:10:38.680 }, 00:10:38.680 "claimed": true, 00:10:38.680 "claim_type": "exclusive_write", 00:10:38.680 "zoned": false, 00:10:38.680 "supported_io_types": { 00:10:38.680 "read": true, 00:10:38.680 "write": true, 00:10:38.680 "unmap": true, 00:10:38.680 "flush": true, 00:10:38.680 "reset": true, 00:10:38.680 "nvme_admin": false, 00:10:38.680 "nvme_io": false, 00:10:38.680 "nvme_io_md": false, 00:10:38.680 "write_zeroes": true, 00:10:38.680 "zcopy": true, 00:10:38.680 "get_zone_info": false, 00:10:38.680 "zone_management": false, 00:10:38.680 "zone_append": false, 00:10:38.681 "compare": false, 00:10:38.681 "compare_and_write": false, 00:10:38.681 "abort": true, 00:10:38.681 "seek_hole": false, 00:10:38.681 "seek_data": false, 00:10:38.681 "copy": true, 00:10:38.681 "nvme_iov_md": false 00:10:38.681 }, 00:10:38.681 "memory_domains": [ 00:10:38.681 { 00:10:38.681 "dma_device_id": "system", 00:10:38.681 "dma_device_type": 1 00:10:38.681 }, 00:10:38.681 { 00:10:38.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.681 "dma_device_type": 2 00:10:38.681 } 00:10:38.681 ], 00:10:38.681 "driver_specific": {} 00:10:38.681 } 00:10:38.681 ] 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.681 "name": "Existed_Raid", 00:10:38.681 "uuid": "5bfcd016-70f0-4505-86f0-863cff39a569", 00:10:38.681 "strip_size_kb": 64, 00:10:38.681 "state": "online", 00:10:38.681 "raid_level": "raid0", 00:10:38.681 "superblock": true, 00:10:38.681 "num_base_bdevs": 4, 
00:10:38.681 "num_base_bdevs_discovered": 4, 00:10:38.681 "num_base_bdevs_operational": 4, 00:10:38.681 "base_bdevs_list": [ 00:10:38.681 { 00:10:38.681 "name": "BaseBdev1", 00:10:38.681 "uuid": "06a35ddd-ad8f-4cde-8c20-fbaeeaaaa1d7", 00:10:38.681 "is_configured": true, 00:10:38.681 "data_offset": 2048, 00:10:38.681 "data_size": 63488 00:10:38.681 }, 00:10:38.681 { 00:10:38.681 "name": "BaseBdev2", 00:10:38.681 "uuid": "b55999df-cf11-4c26-9e6c-fbe40349b300", 00:10:38.681 "is_configured": true, 00:10:38.681 "data_offset": 2048, 00:10:38.681 "data_size": 63488 00:10:38.681 }, 00:10:38.681 { 00:10:38.681 "name": "BaseBdev3", 00:10:38.681 "uuid": "ce4a5194-93b1-4e4c-a6a9-33915fa57b56", 00:10:38.681 "is_configured": true, 00:10:38.681 "data_offset": 2048, 00:10:38.681 "data_size": 63488 00:10:38.681 }, 00:10:38.681 { 00:10:38.681 "name": "BaseBdev4", 00:10:38.681 "uuid": "199703b5-d7f6-4fbd-9fad-26e37f966d39", 00:10:38.681 "is_configured": true, 00:10:38.681 "data_offset": 2048, 00:10:38.681 "data_size": 63488 00:10:38.681 } 00:10:38.681 ] 00:10:38.681 }' 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.681 12:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.940 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:38.940 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:38.940 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:38.940 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:38.940 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:38.940 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:38.940 
12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:38.940 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.940 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.199 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.199 [2024-11-19 12:02:42.316933] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.199 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.199 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.199 "name": "Existed_Raid", 00:10:39.199 "aliases": [ 00:10:39.199 "5bfcd016-70f0-4505-86f0-863cff39a569" 00:10:39.199 ], 00:10:39.199 "product_name": "Raid Volume", 00:10:39.199 "block_size": 512, 00:10:39.199 "num_blocks": 253952, 00:10:39.199 "uuid": "5bfcd016-70f0-4505-86f0-863cff39a569", 00:10:39.199 "assigned_rate_limits": { 00:10:39.200 "rw_ios_per_sec": 0, 00:10:39.200 "rw_mbytes_per_sec": 0, 00:10:39.200 "r_mbytes_per_sec": 0, 00:10:39.200 "w_mbytes_per_sec": 0 00:10:39.200 }, 00:10:39.200 "claimed": false, 00:10:39.200 "zoned": false, 00:10:39.200 "supported_io_types": { 00:10:39.200 "read": true, 00:10:39.200 "write": true, 00:10:39.200 "unmap": true, 00:10:39.200 "flush": true, 00:10:39.200 "reset": true, 00:10:39.200 "nvme_admin": false, 00:10:39.200 "nvme_io": false, 00:10:39.200 "nvme_io_md": false, 00:10:39.200 "write_zeroes": true, 00:10:39.200 "zcopy": false, 00:10:39.200 "get_zone_info": false, 00:10:39.200 "zone_management": false, 00:10:39.200 "zone_append": false, 00:10:39.200 "compare": false, 00:10:39.200 "compare_and_write": false, 00:10:39.200 "abort": false, 00:10:39.200 "seek_hole": false, 00:10:39.200 "seek_data": false, 00:10:39.200 "copy": false, 00:10:39.200 
"nvme_iov_md": false 00:10:39.200 }, 00:10:39.200 "memory_domains": [ 00:10:39.200 { 00:10:39.200 "dma_device_id": "system", 00:10:39.200 "dma_device_type": 1 00:10:39.200 }, 00:10:39.200 { 00:10:39.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.200 "dma_device_type": 2 00:10:39.200 }, 00:10:39.200 { 00:10:39.200 "dma_device_id": "system", 00:10:39.200 "dma_device_type": 1 00:10:39.200 }, 00:10:39.200 { 00:10:39.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.200 "dma_device_type": 2 00:10:39.200 }, 00:10:39.200 { 00:10:39.200 "dma_device_id": "system", 00:10:39.200 "dma_device_type": 1 00:10:39.200 }, 00:10:39.200 { 00:10:39.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.200 "dma_device_type": 2 00:10:39.200 }, 00:10:39.200 { 00:10:39.200 "dma_device_id": "system", 00:10:39.200 "dma_device_type": 1 00:10:39.200 }, 00:10:39.200 { 00:10:39.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.200 "dma_device_type": 2 00:10:39.200 } 00:10:39.200 ], 00:10:39.200 "driver_specific": { 00:10:39.200 "raid": { 00:10:39.200 "uuid": "5bfcd016-70f0-4505-86f0-863cff39a569", 00:10:39.200 "strip_size_kb": 64, 00:10:39.200 "state": "online", 00:10:39.200 "raid_level": "raid0", 00:10:39.200 "superblock": true, 00:10:39.200 "num_base_bdevs": 4, 00:10:39.200 "num_base_bdevs_discovered": 4, 00:10:39.200 "num_base_bdevs_operational": 4, 00:10:39.200 "base_bdevs_list": [ 00:10:39.200 { 00:10:39.200 "name": "BaseBdev1", 00:10:39.200 "uuid": "06a35ddd-ad8f-4cde-8c20-fbaeeaaaa1d7", 00:10:39.200 "is_configured": true, 00:10:39.200 "data_offset": 2048, 00:10:39.200 "data_size": 63488 00:10:39.200 }, 00:10:39.200 { 00:10:39.200 "name": "BaseBdev2", 00:10:39.200 "uuid": "b55999df-cf11-4c26-9e6c-fbe40349b300", 00:10:39.200 "is_configured": true, 00:10:39.200 "data_offset": 2048, 00:10:39.200 "data_size": 63488 00:10:39.200 }, 00:10:39.200 { 00:10:39.200 "name": "BaseBdev3", 00:10:39.200 "uuid": "ce4a5194-93b1-4e4c-a6a9-33915fa57b56", 00:10:39.200 "is_configured": true, 
00:10:39.200 "data_offset": 2048, 00:10:39.200 "data_size": 63488 00:10:39.200 }, 00:10:39.200 { 00:10:39.200 "name": "BaseBdev4", 00:10:39.200 "uuid": "199703b5-d7f6-4fbd-9fad-26e37f966d39", 00:10:39.200 "is_configured": true, 00:10:39.200 "data_offset": 2048, 00:10:39.200 "data_size": 63488 00:10:39.200 } 00:10:39.200 ] 00:10:39.200 } 00:10:39.200 } 00:10:39.200 }' 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:39.200 BaseBdev2 00:10:39.200 BaseBdev3 00:10:39.200 BaseBdev4' 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.200 12:02:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.200 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.459 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.459 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.459 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:39.459 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:39.459 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.459 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.460 [2024-11-19 12:02:42.640091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.460 [2024-11-19 12:02:42.640136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.460 [2024-11-19 12:02:42.640186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.460 "name": "Existed_Raid", 00:10:39.460 "uuid": "5bfcd016-70f0-4505-86f0-863cff39a569", 00:10:39.460 "strip_size_kb": 64, 00:10:39.460 "state": "offline", 00:10:39.460 "raid_level": "raid0", 00:10:39.460 "superblock": true, 00:10:39.460 "num_base_bdevs": 4, 00:10:39.460 "num_base_bdevs_discovered": 3, 00:10:39.460 "num_base_bdevs_operational": 3, 00:10:39.460 "base_bdevs_list": [ 00:10:39.460 { 00:10:39.460 "name": null, 00:10:39.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.460 "is_configured": false, 00:10:39.460 "data_offset": 0, 00:10:39.460 "data_size": 63488 00:10:39.460 }, 00:10:39.460 { 00:10:39.460 "name": "BaseBdev2", 00:10:39.460 "uuid": "b55999df-cf11-4c26-9e6c-fbe40349b300", 00:10:39.460 "is_configured": true, 00:10:39.460 "data_offset": 2048, 00:10:39.460 "data_size": 63488 00:10:39.460 }, 00:10:39.460 { 00:10:39.460 "name": "BaseBdev3", 00:10:39.460 "uuid": "ce4a5194-93b1-4e4c-a6a9-33915fa57b56", 00:10:39.460 "is_configured": true, 00:10:39.460 "data_offset": 2048, 00:10:39.460 "data_size": 63488 00:10:39.460 }, 00:10:39.460 { 00:10:39.460 "name": "BaseBdev4", 00:10:39.460 "uuid": "199703b5-d7f6-4fbd-9fad-26e37f966d39", 00:10:39.460 "is_configured": true, 00:10:39.460 "data_offset": 2048, 00:10:39.460 "data_size": 63488 00:10:39.460 } 00:10:39.460 ] 00:10:39.460 }' 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.460 12:02:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:40.030 12:02:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.030 [2024-11-19 12:02:43.186879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.030 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.030 [2024-11-19 12:02:43.339939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:40.291 12:02:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.291 [2024-11-19 12:02:43.488990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:40.291 [2024-11-19 12:02:43.489176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.291 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.552 BaseBdev2 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.552 [ 00:10:40.552 { 00:10:40.552 "name": "BaseBdev2", 00:10:40.552 "aliases": [ 00:10:40.552 
"c7a188d0-4853-471b-a633-2260a629016e" 00:10:40.552 ], 00:10:40.552 "product_name": "Malloc disk", 00:10:40.552 "block_size": 512, 00:10:40.552 "num_blocks": 65536, 00:10:40.552 "uuid": "c7a188d0-4853-471b-a633-2260a629016e", 00:10:40.552 "assigned_rate_limits": { 00:10:40.552 "rw_ios_per_sec": 0, 00:10:40.552 "rw_mbytes_per_sec": 0, 00:10:40.552 "r_mbytes_per_sec": 0, 00:10:40.552 "w_mbytes_per_sec": 0 00:10:40.552 }, 00:10:40.552 "claimed": false, 00:10:40.552 "zoned": false, 00:10:40.552 "supported_io_types": { 00:10:40.552 "read": true, 00:10:40.552 "write": true, 00:10:40.552 "unmap": true, 00:10:40.552 "flush": true, 00:10:40.552 "reset": true, 00:10:40.552 "nvme_admin": false, 00:10:40.552 "nvme_io": false, 00:10:40.552 "nvme_io_md": false, 00:10:40.552 "write_zeroes": true, 00:10:40.552 "zcopy": true, 00:10:40.552 "get_zone_info": false, 00:10:40.552 "zone_management": false, 00:10:40.552 "zone_append": false, 00:10:40.552 "compare": false, 00:10:40.552 "compare_and_write": false, 00:10:40.552 "abort": true, 00:10:40.552 "seek_hole": false, 00:10:40.552 "seek_data": false, 00:10:40.552 "copy": true, 00:10:40.552 "nvme_iov_md": false 00:10:40.552 }, 00:10:40.552 "memory_domains": [ 00:10:40.552 { 00:10:40.552 "dma_device_id": "system", 00:10:40.552 "dma_device_type": 1 00:10:40.552 }, 00:10:40.552 { 00:10:40.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.552 "dma_device_type": 2 00:10:40.552 } 00:10:40.552 ], 00:10:40.552 "driver_specific": {} 00:10:40.552 } 00:10:40.552 ] 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.552 12:02:43 
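The `waitforbdev BaseBdev2` call above polls via `rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000` until the malloc bdev is registered. A simplified, self-contained sketch of that polling pattern (the function name, injectable check command, and deciseconds timeout are illustrative, not the actual `autotest_common.sh` implementation):

```shell
# Simplified waitforbdev-style poll loop (illustrative only; the real helper
# runs "rpc_cmd bdev_get_bdevs -b <name> -t <timeout>"). The check command is
# injectable here so the sketch runs standalone, without an SPDK target.
waitfor() {
    local check_cmd=$1 timeout_ds=${2:-20} i
    for ((i = 0; i < timeout_ds; i++)); do
        # Return as soon as the check succeeds (i.e. the bdev exists).
        if eval "$check_cmd" >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    return 1  # bdev never appeared within the timeout
}

waitfor true 5 && echo "bdev found"
waitfor false 5 || echo "timed out waiting for bdev"
```

In the real helper the loop also retries on transient RPC failures before giving up; this sketch keeps only the success/timeout skeleton.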
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.552 BaseBdev3 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.552 [ 00:10:40.552 { 
00:10:40.552 "name": "BaseBdev3", 00:10:40.552 "aliases": [ 00:10:40.552 "28081ff6-6329-40df-81dd-c4f3cee923fe" 00:10:40.552 ], 00:10:40.552 "product_name": "Malloc disk", 00:10:40.552 "block_size": 512, 00:10:40.552 "num_blocks": 65536, 00:10:40.552 "uuid": "28081ff6-6329-40df-81dd-c4f3cee923fe", 00:10:40.552 "assigned_rate_limits": { 00:10:40.552 "rw_ios_per_sec": 0, 00:10:40.552 "rw_mbytes_per_sec": 0, 00:10:40.552 "r_mbytes_per_sec": 0, 00:10:40.552 "w_mbytes_per_sec": 0 00:10:40.552 }, 00:10:40.552 "claimed": false, 00:10:40.552 "zoned": false, 00:10:40.552 "supported_io_types": { 00:10:40.552 "read": true, 00:10:40.552 "write": true, 00:10:40.552 "unmap": true, 00:10:40.552 "flush": true, 00:10:40.552 "reset": true, 00:10:40.552 "nvme_admin": false, 00:10:40.552 "nvme_io": false, 00:10:40.552 "nvme_io_md": false, 00:10:40.552 "write_zeroes": true, 00:10:40.552 "zcopy": true, 00:10:40.552 "get_zone_info": false, 00:10:40.552 "zone_management": false, 00:10:40.552 "zone_append": false, 00:10:40.552 "compare": false, 00:10:40.552 "compare_and_write": false, 00:10:40.552 "abort": true, 00:10:40.552 "seek_hole": false, 00:10:40.552 "seek_data": false, 00:10:40.552 "copy": true, 00:10:40.552 "nvme_iov_md": false 00:10:40.552 }, 00:10:40.552 "memory_domains": [ 00:10:40.552 { 00:10:40.552 "dma_device_id": "system", 00:10:40.552 "dma_device_type": 1 00:10:40.552 }, 00:10:40.552 { 00:10:40.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.552 "dma_device_type": 2 00:10:40.552 } 00:10:40.552 ], 00:10:40.552 "driver_specific": {} 00:10:40.552 } 00:10:40.552 ] 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.552 BaseBdev4 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.552 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:40.552 [ 00:10:40.552 { 00:10:40.552 "name": "BaseBdev4", 00:10:40.552 "aliases": [ 00:10:40.552 "5c532e15-560b-4999-a45b-26d603afa962" 00:10:40.552 ], 00:10:40.552 "product_name": "Malloc disk", 00:10:40.552 "block_size": 512, 00:10:40.552 "num_blocks": 65536, 00:10:40.552 "uuid": "5c532e15-560b-4999-a45b-26d603afa962", 00:10:40.552 "assigned_rate_limits": { 00:10:40.553 "rw_ios_per_sec": 0, 00:10:40.553 "rw_mbytes_per_sec": 0, 00:10:40.553 "r_mbytes_per_sec": 0, 00:10:40.553 "w_mbytes_per_sec": 0 00:10:40.553 }, 00:10:40.553 "claimed": false, 00:10:40.553 "zoned": false, 00:10:40.553 "supported_io_types": { 00:10:40.553 "read": true, 00:10:40.553 "write": true, 00:10:40.553 "unmap": true, 00:10:40.553 "flush": true, 00:10:40.553 "reset": true, 00:10:40.553 "nvme_admin": false, 00:10:40.553 "nvme_io": false, 00:10:40.553 "nvme_io_md": false, 00:10:40.553 "write_zeroes": true, 00:10:40.553 "zcopy": true, 00:10:40.553 "get_zone_info": false, 00:10:40.553 "zone_management": false, 00:10:40.553 "zone_append": false, 00:10:40.553 "compare": false, 00:10:40.553 "compare_and_write": false, 00:10:40.553 "abort": true, 00:10:40.553 "seek_hole": false, 00:10:40.553 "seek_data": false, 00:10:40.553 "copy": true, 00:10:40.553 "nvme_iov_md": false 00:10:40.553 }, 00:10:40.553 "memory_domains": [ 00:10:40.553 { 00:10:40.553 "dma_device_id": "system", 00:10:40.553 "dma_device_type": 1 00:10:40.553 }, 00:10:40.553 { 00:10:40.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.553 "dma_device_type": 2 00:10:40.553 } 00:10:40.553 ], 00:10:40.553 "driver_specific": {} 00:10:40.553 } 00:10:40.553 ] 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:40.553 12:02:43 
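Each of the descriptors dumped above reports `"block_size": 512` and `"num_blocks": 65536`, which is exactly the geometry requested by `bdev_malloc_create 32 512 -b <name>` (32 MiB of 512-byte blocks). A quick cross-check of that arithmetic:

```shell
# Cross-check the geometry reported by bdev_get_bdevs against the
# "bdev_malloc_create 32 512" arguments: 32 MiB total, 512-byte blocks.
block_size=512
num_blocks=65536
size_mib=$(( block_size * num_blocks / 1024 / 1024 ))
echo "malloc bdev size: ${size_mib} MiB"
```

This prints `malloc bdev size: 32 MiB`, matching the RPC arguments.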
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.553 [2024-11-19 12:02:43.876626] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.553 [2024-11-19 12:02:43.876791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.553 [2024-11-19 12:02:43.876840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.553 [2024-11-19 12:02:43.878731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.553 [2024-11-19 12:02:43.878848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.553 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.813 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.813 "name": "Existed_Raid", 00:10:40.813 "uuid": "1cfb0bde-239b-42e1-af0b-1a2650ff916f", 00:10:40.813 "strip_size_kb": 64, 00:10:40.813 "state": "configuring", 00:10:40.814 "raid_level": "raid0", 00:10:40.814 "superblock": true, 00:10:40.814 "num_base_bdevs": 4, 00:10:40.814 "num_base_bdevs_discovered": 3, 00:10:40.814 "num_base_bdevs_operational": 4, 00:10:40.814 "base_bdevs_list": [ 00:10:40.814 { 00:10:40.814 "name": "BaseBdev1", 00:10:40.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.814 "is_configured": false, 00:10:40.814 "data_offset": 0, 00:10:40.814 "data_size": 0 00:10:40.814 }, 00:10:40.814 { 00:10:40.814 "name": "BaseBdev2", 00:10:40.814 "uuid": "c7a188d0-4853-471b-a633-2260a629016e", 00:10:40.814 "is_configured": true, 00:10:40.814 "data_offset": 2048, 00:10:40.814 "data_size": 63488 
00:10:40.814 }, 00:10:40.814 { 00:10:40.814 "name": "BaseBdev3", 00:10:40.814 "uuid": "28081ff6-6329-40df-81dd-c4f3cee923fe", 00:10:40.814 "is_configured": true, 00:10:40.814 "data_offset": 2048, 00:10:40.814 "data_size": 63488 00:10:40.814 }, 00:10:40.814 { 00:10:40.814 "name": "BaseBdev4", 00:10:40.814 "uuid": "5c532e15-560b-4999-a45b-26d603afa962", 00:10:40.814 "is_configured": true, 00:10:40.814 "data_offset": 2048, 00:10:40.814 "data_size": 63488 00:10:40.814 } 00:10:40.814 ] 00:10:40.814 }' 00:10:40.814 12:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.814 12:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.073 [2024-11-19 12:02:44.347797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.073 "name": "Existed_Raid", 00:10:41.073 "uuid": "1cfb0bde-239b-42e1-af0b-1a2650ff916f", 00:10:41.073 "strip_size_kb": 64, 00:10:41.073 "state": "configuring", 00:10:41.073 "raid_level": "raid0", 00:10:41.073 "superblock": true, 00:10:41.073 "num_base_bdevs": 4, 00:10:41.073 "num_base_bdevs_discovered": 2, 00:10:41.073 "num_base_bdevs_operational": 4, 00:10:41.073 "base_bdevs_list": [ 00:10:41.073 { 00:10:41.073 "name": "BaseBdev1", 00:10:41.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.073 "is_configured": false, 00:10:41.073 "data_offset": 0, 00:10:41.073 "data_size": 0 00:10:41.073 }, 00:10:41.073 { 00:10:41.073 "name": null, 00:10:41.073 "uuid": "c7a188d0-4853-471b-a633-2260a629016e", 00:10:41.073 "is_configured": false, 00:10:41.073 "data_offset": 0, 00:10:41.073 "data_size": 63488 
00:10:41.073 }, 00:10:41.073 { 00:10:41.073 "name": "BaseBdev3", 00:10:41.073 "uuid": "28081ff6-6329-40df-81dd-c4f3cee923fe", 00:10:41.073 "is_configured": true, 00:10:41.073 "data_offset": 2048, 00:10:41.073 "data_size": 63488 00:10:41.073 }, 00:10:41.073 { 00:10:41.073 "name": "BaseBdev4", 00:10:41.073 "uuid": "5c532e15-560b-4999-a45b-26d603afa962", 00:10:41.073 "is_configured": true, 00:10:41.073 "data_offset": 2048, 00:10:41.073 "data_size": 63488 00:10:41.073 } 00:10:41.073 ] 00:10:41.073 }' 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.073 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.643 [2024-11-19 12:02:44.855086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.643 BaseBdev1 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.643 [ 00:10:41.643 { 00:10:41.643 "name": "BaseBdev1", 00:10:41.643 "aliases": [ 00:10:41.643 "d6f36b28-1efc-4da8-8c78-84cb2596ac7b" 00:10:41.643 ], 00:10:41.643 "product_name": "Malloc disk", 00:10:41.643 "block_size": 512, 00:10:41.643 "num_blocks": 65536, 00:10:41.643 "uuid": "d6f36b28-1efc-4da8-8c78-84cb2596ac7b", 00:10:41.643 "assigned_rate_limits": { 00:10:41.643 "rw_ios_per_sec": 0, 00:10:41.643 "rw_mbytes_per_sec": 0, 
00:10:41.643 "r_mbytes_per_sec": 0, 00:10:41.643 "w_mbytes_per_sec": 0 00:10:41.643 }, 00:10:41.643 "claimed": true, 00:10:41.643 "claim_type": "exclusive_write", 00:10:41.643 "zoned": false, 00:10:41.643 "supported_io_types": { 00:10:41.643 "read": true, 00:10:41.643 "write": true, 00:10:41.643 "unmap": true, 00:10:41.643 "flush": true, 00:10:41.643 "reset": true, 00:10:41.643 "nvme_admin": false, 00:10:41.643 "nvme_io": false, 00:10:41.643 "nvme_io_md": false, 00:10:41.643 "write_zeroes": true, 00:10:41.643 "zcopy": true, 00:10:41.643 "get_zone_info": false, 00:10:41.643 "zone_management": false, 00:10:41.643 "zone_append": false, 00:10:41.643 "compare": false, 00:10:41.643 "compare_and_write": false, 00:10:41.643 "abort": true, 00:10:41.643 "seek_hole": false, 00:10:41.643 "seek_data": false, 00:10:41.643 "copy": true, 00:10:41.643 "nvme_iov_md": false 00:10:41.643 }, 00:10:41.643 "memory_domains": [ 00:10:41.643 { 00:10:41.643 "dma_device_id": "system", 00:10:41.643 "dma_device_type": 1 00:10:41.643 }, 00:10:41.643 { 00:10:41.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.643 "dma_device_type": 2 00:10:41.643 } 00:10:41.643 ], 00:10:41.643 "driver_specific": {} 00:10:41.643 } 00:10:41.643 ] 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.643 12:02:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.643 "name": "Existed_Raid", 00:10:41.643 "uuid": "1cfb0bde-239b-42e1-af0b-1a2650ff916f", 00:10:41.643 "strip_size_kb": 64, 00:10:41.643 "state": "configuring", 00:10:41.643 "raid_level": "raid0", 00:10:41.643 "superblock": true, 00:10:41.643 "num_base_bdevs": 4, 00:10:41.643 "num_base_bdevs_discovered": 3, 00:10:41.643 "num_base_bdevs_operational": 4, 00:10:41.643 "base_bdevs_list": [ 00:10:41.643 { 00:10:41.643 "name": "BaseBdev1", 00:10:41.643 "uuid": "d6f36b28-1efc-4da8-8c78-84cb2596ac7b", 00:10:41.643 "is_configured": true, 00:10:41.643 "data_offset": 2048, 00:10:41.643 "data_size": 63488 00:10:41.643 }, 00:10:41.643 { 
00:10:41.643 "name": null, 00:10:41.643 "uuid": "c7a188d0-4853-471b-a633-2260a629016e", 00:10:41.643 "is_configured": false, 00:10:41.643 "data_offset": 0, 00:10:41.643 "data_size": 63488 00:10:41.643 }, 00:10:41.643 { 00:10:41.643 "name": "BaseBdev3", 00:10:41.643 "uuid": "28081ff6-6329-40df-81dd-c4f3cee923fe", 00:10:41.643 "is_configured": true, 00:10:41.643 "data_offset": 2048, 00:10:41.643 "data_size": 63488 00:10:41.643 }, 00:10:41.643 { 00:10:41.643 "name": "BaseBdev4", 00:10:41.643 "uuid": "5c532e15-560b-4999-a45b-26d603afa962", 00:10:41.643 "is_configured": true, 00:10:41.643 "data_offset": 2048, 00:10:41.643 "data_size": 63488 00:10:41.643 } 00:10:41.643 ] 00:10:41.643 }' 00:10:41.643 12:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.644 12:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.215 [2024-11-19 12:02:45.382240] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.215 12:02:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.215 "name": "Existed_Raid", 00:10:42.215 "uuid": "1cfb0bde-239b-42e1-af0b-1a2650ff916f", 00:10:42.215 "strip_size_kb": 64, 00:10:42.215 "state": "configuring", 00:10:42.215 "raid_level": "raid0", 00:10:42.215 "superblock": true, 00:10:42.215 "num_base_bdevs": 4, 00:10:42.215 "num_base_bdevs_discovered": 2, 00:10:42.215 "num_base_bdevs_operational": 4, 00:10:42.215 "base_bdevs_list": [ 00:10:42.215 { 00:10:42.215 "name": "BaseBdev1", 00:10:42.215 "uuid": "d6f36b28-1efc-4da8-8c78-84cb2596ac7b", 00:10:42.215 "is_configured": true, 00:10:42.215 "data_offset": 2048, 00:10:42.215 "data_size": 63488 00:10:42.215 }, 00:10:42.215 { 00:10:42.215 "name": null, 00:10:42.215 "uuid": "c7a188d0-4853-471b-a633-2260a629016e", 00:10:42.215 "is_configured": false, 00:10:42.215 "data_offset": 0, 00:10:42.215 "data_size": 63488 00:10:42.215 }, 00:10:42.215 { 00:10:42.215 "name": null, 00:10:42.215 "uuid": "28081ff6-6329-40df-81dd-c4f3cee923fe", 00:10:42.215 "is_configured": false, 00:10:42.215 "data_offset": 0, 00:10:42.215 "data_size": 63488 00:10:42.215 }, 00:10:42.215 { 00:10:42.215 "name": "BaseBdev4", 00:10:42.215 "uuid": "5c532e15-560b-4999-a45b-26d603afa962", 00:10:42.215 "is_configured": true, 00:10:42.215 "data_offset": 2048, 00:10:42.215 "data_size": 63488 00:10:42.215 } 00:10:42.215 ] 00:10:42.215 }' 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.215 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.787 12:02:45 
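The repeated `verify_raid_bdev_state Existed_Raid configuring raid0 64 4` calls in this log fetch the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'` and compare fields like `state` against expectations. A minimal sketch of that comparison, with a sample of the JSON captured above embedded so it runs standalone (the `sed` extraction is a crude stand-in for `jq`, for illustration only):

```shell
# Minimal sketch of the verify_raid_bdev_state check (simplified; the real
# helper uses rpc_cmd + jq). A fragment of the JSON from the log is embedded
# so the sketch needs no running SPDK target.
raid_bdev_info='{ "name": "Existed_Raid", "state": "configuring",
  "raid_level": "raid0", "strip_size_kb": 64, "num_base_bdevs": 4 }'
expected_state="configuring"

# Crude single-field extraction without jq, for illustration only.
state=$(sed -n 's/.*"state": "\([^"]*\)".*/\1/p' <<< "$raid_bdev_info")

if [[ $state == "$expected_state" ]]; then
    echo "raid state OK: $state"
else
    echo "raid state mismatch: got '$state', want '$expected_state'" >&2
fi
```

The real helper checks `raid_level`, `strip_size_kb`, and the base-bdev counts the same way before declaring the state verified.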
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.787 [2024-11-19 12:02:45.913324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.787 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.787 "name": "Existed_Raid", 00:10:42.787 "uuid": "1cfb0bde-239b-42e1-af0b-1a2650ff916f", 00:10:42.787 "strip_size_kb": 64, 00:10:42.787 "state": "configuring", 00:10:42.788 "raid_level": "raid0", 00:10:42.788 "superblock": true, 00:10:42.788 "num_base_bdevs": 4, 00:10:42.788 "num_base_bdevs_discovered": 3, 00:10:42.788 "num_base_bdevs_operational": 4, 00:10:42.788 "base_bdevs_list": [ 00:10:42.788 { 00:10:42.788 "name": "BaseBdev1", 00:10:42.788 "uuid": "d6f36b28-1efc-4da8-8c78-84cb2596ac7b", 00:10:42.788 "is_configured": true, 00:10:42.788 "data_offset": 2048, 00:10:42.788 "data_size": 63488 00:10:42.788 }, 00:10:42.788 { 00:10:42.788 "name": null, 00:10:42.788 "uuid": "c7a188d0-4853-471b-a633-2260a629016e", 00:10:42.788 "is_configured": false, 00:10:42.788 "data_offset": 0, 00:10:42.788 "data_size": 63488 00:10:42.788 }, 00:10:42.788 { 00:10:42.788 "name": "BaseBdev3", 00:10:42.788 "uuid": "28081ff6-6329-40df-81dd-c4f3cee923fe", 00:10:42.788 "is_configured": true, 00:10:42.788 "data_offset": 2048, 00:10:42.788 "data_size": 63488 00:10:42.788 }, 00:10:42.788 { 00:10:42.788 "name": "BaseBdev4", 00:10:42.788 "uuid": 
"5c532e15-560b-4999-a45b-26d603afa962", 00:10:42.788 "is_configured": true, 00:10:42.788 "data_offset": 2048, 00:10:42.788 "data_size": 63488 00:10:42.788 } 00:10:42.788 ] 00:10:42.788 }' 00:10:42.788 12:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.788 12:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.047 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:43.047 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.047 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.047 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.047 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.047 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:43.047 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:43.047 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.047 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.047 [2024-11-19 12:02:46.400517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.306 "name": "Existed_Raid", 00:10:43.306 "uuid": "1cfb0bde-239b-42e1-af0b-1a2650ff916f", 00:10:43.306 "strip_size_kb": 64, 00:10:43.306 "state": "configuring", 00:10:43.306 "raid_level": "raid0", 00:10:43.306 "superblock": true, 00:10:43.306 "num_base_bdevs": 4, 00:10:43.306 "num_base_bdevs_discovered": 2, 00:10:43.306 "num_base_bdevs_operational": 4, 00:10:43.306 "base_bdevs_list": [ 00:10:43.306 { 00:10:43.306 "name": null, 00:10:43.306 
"uuid": "d6f36b28-1efc-4da8-8c78-84cb2596ac7b", 00:10:43.306 "is_configured": false, 00:10:43.306 "data_offset": 0, 00:10:43.306 "data_size": 63488 00:10:43.306 }, 00:10:43.306 { 00:10:43.306 "name": null, 00:10:43.306 "uuid": "c7a188d0-4853-471b-a633-2260a629016e", 00:10:43.306 "is_configured": false, 00:10:43.306 "data_offset": 0, 00:10:43.306 "data_size": 63488 00:10:43.306 }, 00:10:43.306 { 00:10:43.306 "name": "BaseBdev3", 00:10:43.306 "uuid": "28081ff6-6329-40df-81dd-c4f3cee923fe", 00:10:43.306 "is_configured": true, 00:10:43.306 "data_offset": 2048, 00:10:43.306 "data_size": 63488 00:10:43.306 }, 00:10:43.306 { 00:10:43.306 "name": "BaseBdev4", 00:10:43.306 "uuid": "5c532e15-560b-4999-a45b-26d603afa962", 00:10:43.306 "is_configured": true, 00:10:43.306 "data_offset": 2048, 00:10:43.306 "data_size": 63488 00:10:43.306 } 00:10:43.306 ] 00:10:43.306 }' 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.306 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.876 [2024-11-19 12:02:46.989829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.876 12:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.876 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.876 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.876 12:02:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.876 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.876 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.876 "name": "Existed_Raid", 00:10:43.876 "uuid": "1cfb0bde-239b-42e1-af0b-1a2650ff916f", 00:10:43.876 "strip_size_kb": 64, 00:10:43.876 "state": "configuring", 00:10:43.876 "raid_level": "raid0", 00:10:43.876 "superblock": true, 00:10:43.876 "num_base_bdevs": 4, 00:10:43.876 "num_base_bdevs_discovered": 3, 00:10:43.876 "num_base_bdevs_operational": 4, 00:10:43.876 "base_bdevs_list": [ 00:10:43.876 { 00:10:43.876 "name": null, 00:10:43.876 "uuid": "d6f36b28-1efc-4da8-8c78-84cb2596ac7b", 00:10:43.876 "is_configured": false, 00:10:43.876 "data_offset": 0, 00:10:43.876 "data_size": 63488 00:10:43.876 }, 00:10:43.876 { 00:10:43.876 "name": "BaseBdev2", 00:10:43.876 "uuid": "c7a188d0-4853-471b-a633-2260a629016e", 00:10:43.876 "is_configured": true, 00:10:43.876 "data_offset": 2048, 00:10:43.876 "data_size": 63488 00:10:43.876 }, 00:10:43.876 { 00:10:43.876 "name": "BaseBdev3", 00:10:43.876 "uuid": "28081ff6-6329-40df-81dd-c4f3cee923fe", 00:10:43.876 "is_configured": true, 00:10:43.876 "data_offset": 2048, 00:10:43.876 "data_size": 63488 00:10:43.876 }, 00:10:43.876 { 00:10:43.876 "name": "BaseBdev4", 00:10:43.876 "uuid": "5c532e15-560b-4999-a45b-26d603afa962", 00:10:43.876 "is_configured": true, 00:10:43.876 "data_offset": 2048, 00:10:43.876 "data_size": 63488 00:10:43.876 } 00:10:43.876 ] 00:10:43.876 }' 00:10:43.876 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.876 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.136 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:44.136 12:02:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.136 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.136 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.136 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.136 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:44.136 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:44.136 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.136 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.136 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d6f36b28-1efc-4da8-8c78-84cb2596ac7b 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.397 [2024-11-19 12:02:47.564265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:44.397 [2024-11-19 12:02:47.564615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:44.397 NewBaseBdev 00:10:44.397 [2024-11-19 12:02:47.564670] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:44.397 [2024-11-19 12:02:47.564930] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:44.397 [2024-11-19 12:02:47.565087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:44.397 [2024-11-19 12:02:47.565101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:44.397 [2024-11-19 12:02:47.565245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.397 12:02:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.397 [ 00:10:44.397 { 00:10:44.397 "name": "NewBaseBdev", 00:10:44.397 "aliases": [ 00:10:44.397 "d6f36b28-1efc-4da8-8c78-84cb2596ac7b" 00:10:44.397 ], 00:10:44.397 "product_name": "Malloc disk", 00:10:44.397 "block_size": 512, 00:10:44.397 "num_blocks": 65536, 00:10:44.397 "uuid": "d6f36b28-1efc-4da8-8c78-84cb2596ac7b", 00:10:44.397 "assigned_rate_limits": { 00:10:44.397 "rw_ios_per_sec": 0, 00:10:44.397 "rw_mbytes_per_sec": 0, 00:10:44.397 "r_mbytes_per_sec": 0, 00:10:44.397 "w_mbytes_per_sec": 0 00:10:44.397 }, 00:10:44.397 "claimed": true, 00:10:44.397 "claim_type": "exclusive_write", 00:10:44.397 "zoned": false, 00:10:44.397 "supported_io_types": { 00:10:44.397 "read": true, 00:10:44.397 "write": true, 00:10:44.397 "unmap": true, 00:10:44.397 "flush": true, 00:10:44.397 "reset": true, 00:10:44.397 "nvme_admin": false, 00:10:44.397 "nvme_io": false, 00:10:44.397 "nvme_io_md": false, 00:10:44.397 "write_zeroes": true, 00:10:44.397 "zcopy": true, 00:10:44.397 "get_zone_info": false, 00:10:44.397 "zone_management": false, 00:10:44.397 "zone_append": false, 00:10:44.397 "compare": false, 00:10:44.397 "compare_and_write": false, 00:10:44.397 "abort": true, 00:10:44.397 "seek_hole": false, 00:10:44.397 "seek_data": false, 00:10:44.397 "copy": true, 00:10:44.397 "nvme_iov_md": false 00:10:44.397 }, 00:10:44.397 "memory_domains": [ 00:10:44.397 { 00:10:44.397 "dma_device_id": "system", 00:10:44.397 "dma_device_type": 1 00:10:44.397 }, 00:10:44.397 { 00:10:44.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.397 "dma_device_type": 2 00:10:44.397 } 00:10:44.397 ], 00:10:44.397 "driver_specific": {} 00:10:44.397 } 00:10:44.397 ] 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:44.397 12:02:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.397 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.397 "name": "Existed_Raid", 00:10:44.397 "uuid": "1cfb0bde-239b-42e1-af0b-1a2650ff916f", 00:10:44.397 "strip_size_kb": 64, 00:10:44.397 
"state": "online", 00:10:44.397 "raid_level": "raid0", 00:10:44.397 "superblock": true, 00:10:44.397 "num_base_bdevs": 4, 00:10:44.397 "num_base_bdevs_discovered": 4, 00:10:44.397 "num_base_bdevs_operational": 4, 00:10:44.397 "base_bdevs_list": [ 00:10:44.397 { 00:10:44.397 "name": "NewBaseBdev", 00:10:44.397 "uuid": "d6f36b28-1efc-4da8-8c78-84cb2596ac7b", 00:10:44.397 "is_configured": true, 00:10:44.397 "data_offset": 2048, 00:10:44.397 "data_size": 63488 00:10:44.397 }, 00:10:44.397 { 00:10:44.397 "name": "BaseBdev2", 00:10:44.397 "uuid": "c7a188d0-4853-471b-a633-2260a629016e", 00:10:44.397 "is_configured": true, 00:10:44.397 "data_offset": 2048, 00:10:44.397 "data_size": 63488 00:10:44.397 }, 00:10:44.397 { 00:10:44.397 "name": "BaseBdev3", 00:10:44.397 "uuid": "28081ff6-6329-40df-81dd-c4f3cee923fe", 00:10:44.397 "is_configured": true, 00:10:44.397 "data_offset": 2048, 00:10:44.397 "data_size": 63488 00:10:44.397 }, 00:10:44.397 { 00:10:44.397 "name": "BaseBdev4", 00:10:44.398 "uuid": "5c532e15-560b-4999-a45b-26d603afa962", 00:10:44.398 "is_configured": true, 00:10:44.398 "data_offset": 2048, 00:10:44.398 "data_size": 63488 00:10:44.398 } 00:10:44.398 ] 00:10:44.398 }' 00:10:44.398 12:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.398 12:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.969 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:44.969 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:44.969 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.969 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.969 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.969 
12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.969 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.969 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:44.969 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.969 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.969 [2024-11-19 12:02:48.051856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.969 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.969 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.969 "name": "Existed_Raid", 00:10:44.969 "aliases": [ 00:10:44.969 "1cfb0bde-239b-42e1-af0b-1a2650ff916f" 00:10:44.969 ], 00:10:44.969 "product_name": "Raid Volume", 00:10:44.969 "block_size": 512, 00:10:44.969 "num_blocks": 253952, 00:10:44.969 "uuid": "1cfb0bde-239b-42e1-af0b-1a2650ff916f", 00:10:44.969 "assigned_rate_limits": { 00:10:44.969 "rw_ios_per_sec": 0, 00:10:44.969 "rw_mbytes_per_sec": 0, 00:10:44.969 "r_mbytes_per_sec": 0, 00:10:44.969 "w_mbytes_per_sec": 0 00:10:44.969 }, 00:10:44.969 "claimed": false, 00:10:44.969 "zoned": false, 00:10:44.969 "supported_io_types": { 00:10:44.969 "read": true, 00:10:44.969 "write": true, 00:10:44.969 "unmap": true, 00:10:44.969 "flush": true, 00:10:44.969 "reset": true, 00:10:44.969 "nvme_admin": false, 00:10:44.969 "nvme_io": false, 00:10:44.969 "nvme_io_md": false, 00:10:44.969 "write_zeroes": true, 00:10:44.969 "zcopy": false, 00:10:44.969 "get_zone_info": false, 00:10:44.969 "zone_management": false, 00:10:44.969 "zone_append": false, 00:10:44.969 "compare": false, 00:10:44.969 "compare_and_write": false, 00:10:44.969 "abort": 
false, 00:10:44.969 "seek_hole": false, 00:10:44.969 "seek_data": false, 00:10:44.969 "copy": false, 00:10:44.969 "nvme_iov_md": false 00:10:44.969 }, 00:10:44.969 "memory_domains": [ 00:10:44.969 { 00:10:44.970 "dma_device_id": "system", 00:10:44.970 "dma_device_type": 1 00:10:44.970 }, 00:10:44.970 { 00:10:44.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.970 "dma_device_type": 2 00:10:44.970 }, 00:10:44.970 { 00:10:44.970 "dma_device_id": "system", 00:10:44.970 "dma_device_type": 1 00:10:44.970 }, 00:10:44.970 { 00:10:44.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.970 "dma_device_type": 2 00:10:44.970 }, 00:10:44.970 { 00:10:44.970 "dma_device_id": "system", 00:10:44.970 "dma_device_type": 1 00:10:44.970 }, 00:10:44.970 { 00:10:44.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.970 "dma_device_type": 2 00:10:44.970 }, 00:10:44.970 { 00:10:44.970 "dma_device_id": "system", 00:10:44.970 "dma_device_type": 1 00:10:44.970 }, 00:10:44.970 { 00:10:44.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.970 "dma_device_type": 2 00:10:44.970 } 00:10:44.970 ], 00:10:44.970 "driver_specific": { 00:10:44.970 "raid": { 00:10:44.970 "uuid": "1cfb0bde-239b-42e1-af0b-1a2650ff916f", 00:10:44.970 "strip_size_kb": 64, 00:10:44.970 "state": "online", 00:10:44.970 "raid_level": "raid0", 00:10:44.970 "superblock": true, 00:10:44.970 "num_base_bdevs": 4, 00:10:44.970 "num_base_bdevs_discovered": 4, 00:10:44.970 "num_base_bdevs_operational": 4, 00:10:44.970 "base_bdevs_list": [ 00:10:44.970 { 00:10:44.970 "name": "NewBaseBdev", 00:10:44.970 "uuid": "d6f36b28-1efc-4da8-8c78-84cb2596ac7b", 00:10:44.970 "is_configured": true, 00:10:44.970 "data_offset": 2048, 00:10:44.970 "data_size": 63488 00:10:44.970 }, 00:10:44.970 { 00:10:44.970 "name": "BaseBdev2", 00:10:44.970 "uuid": "c7a188d0-4853-471b-a633-2260a629016e", 00:10:44.970 "is_configured": true, 00:10:44.970 "data_offset": 2048, 00:10:44.970 "data_size": 63488 00:10:44.970 }, 00:10:44.970 { 00:10:44.970 
"name": "BaseBdev3", 00:10:44.970 "uuid": "28081ff6-6329-40df-81dd-c4f3cee923fe", 00:10:44.970 "is_configured": true, 00:10:44.970 "data_offset": 2048, 00:10:44.970 "data_size": 63488 00:10:44.970 }, 00:10:44.970 { 00:10:44.970 "name": "BaseBdev4", 00:10:44.970 "uuid": "5c532e15-560b-4999-a45b-26d603afa962", 00:10:44.970 "is_configured": true, 00:10:44.970 "data_offset": 2048, 00:10:44.970 "data_size": 63488 00:10:44.970 } 00:10:44.970 ] 00:10:44.970 } 00:10:44.970 } 00:10:44.970 }' 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:44.970 BaseBdev2 00:10:44.970 BaseBdev3 00:10:44.970 BaseBdev4' 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.970 12:02:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.970 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.230 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.230 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.230 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.230 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.230 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.230 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.230 [2024-11-19 12:02:48.374975] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.230 [2024-11-19 12:02:48.375101] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.230 [2024-11-19 12:02:48.375193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.230 [2024-11-19 12:02:48.375280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.230 [2024-11-19 12:02:48.375367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:45.230 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.230 12:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70072 00:10:45.230 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70072 ']' 00:10:45.230 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70072 00:10:45.230 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:45.230 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.231 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70072 00:10:45.231 killing process with pid 70072 00:10:45.231 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.231 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.231 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70072' 00:10:45.231 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70072 00:10:45.231 [2024-11-19 12:02:48.407333] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.231 12:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70072 00:10:45.490 [2024-11-19 12:02:48.797800] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.871 12:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:46.871 00:10:46.871 real 0m11.453s 00:10:46.871 user 0m18.311s 00:10:46.871 sys 0m1.988s 00:10:46.871 12:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.871 12:02:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.871 ************************************ 00:10:46.871 END TEST raid_state_function_test_sb 00:10:46.871 ************************************ 00:10:46.871 12:02:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:46.871 12:02:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:46.871 12:02:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.871 12:02:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.871 ************************************ 00:10:46.871 START TEST raid_superblock_test 00:10:46.871 ************************************ 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70741 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70741 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70741 ']' 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.871 12:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.871 [2024-11-19 12:02:50.044241] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:46.871 [2024-11-19 12:02:50.044446] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70741 ] 00:10:46.871 [2024-11-19 12:02:50.224907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.130 [2024-11-19 12:02:50.343995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.390 [2024-11-19 12:02:50.545001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.390 [2024-11-19 12:02:50.545065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:47.652 
12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.652 malloc1 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.652 [2024-11-19 12:02:50.931052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:47.652 [2024-11-19 12:02:50.931235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.652 [2024-11-19 12:02:50.931282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:47.652 [2024-11-19 12:02:50.931337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.652 [2024-11-19 12:02:50.933516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.652 [2024-11-19 12:02:50.933588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:47.652 pt1 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.652 malloc2 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.652 [2024-11-19 12:02:50.988757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:47.652 [2024-11-19 12:02:50.988888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.652 [2024-11-19 12:02:50.988911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:47.652 [2024-11-19 12:02:50.988920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.652 [2024-11-19 12:02:50.991030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.652 [2024-11-19 12:02:50.991063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:47.652 
pt2 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.652 12:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.913 malloc3 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.913 [2024-11-19 12:02:51.059252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:47.913 [2024-11-19 12:02:51.059379] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.913 [2024-11-19 12:02:51.059414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:47.913 [2024-11-19 12:02:51.059465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.913 [2024-11-19 12:02:51.061530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.913 [2024-11-19 12:02:51.061639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:47.913 pt3 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.913 malloc4 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.913 [2024-11-19 12:02:51.115694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:47.913 [2024-11-19 12:02:51.115826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.913 [2024-11-19 12:02:51.115877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:47.913 [2024-11-19 12:02:51.115911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.913 [2024-11-19 12:02:51.117974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.913 [2024-11-19 12:02:51.118071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:47.913 pt4 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.913 [2024-11-19 12:02:51.127717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:47.913 [2024-11-19 
12:02:51.129656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:47.913 [2024-11-19 12:02:51.129762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:47.913 [2024-11-19 12:02:51.129846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:47.913 [2024-11-19 12:02:51.130086] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:47.913 [2024-11-19 12:02:51.130138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:47.913 [2024-11-19 12:02:51.130405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:47.913 [2024-11-19 12:02:51.130611] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:47.913 [2024-11-19 12:02:51.130658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:47.913 [2024-11-19 12:02:51.130850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.913 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.913 "name": "raid_bdev1", 00:10:47.913 "uuid": "325c2ebf-631f-411f-b6eb-38db0904f34e", 00:10:47.913 "strip_size_kb": 64, 00:10:47.913 "state": "online", 00:10:47.913 "raid_level": "raid0", 00:10:47.913 "superblock": true, 00:10:47.913 "num_base_bdevs": 4, 00:10:47.913 "num_base_bdevs_discovered": 4, 00:10:47.913 "num_base_bdevs_operational": 4, 00:10:47.913 "base_bdevs_list": [ 00:10:47.913 { 00:10:47.913 "name": "pt1", 00:10:47.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:47.913 "is_configured": true, 00:10:47.913 "data_offset": 2048, 00:10:47.913 "data_size": 63488 00:10:47.913 }, 00:10:47.913 { 00:10:47.913 "name": "pt2", 00:10:47.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.913 "is_configured": true, 00:10:47.913 "data_offset": 2048, 00:10:47.913 "data_size": 63488 00:10:47.913 }, 00:10:47.913 { 00:10:47.913 "name": "pt3", 00:10:47.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.914 "is_configured": true, 00:10:47.914 "data_offset": 2048, 00:10:47.914 
"data_size": 63488 00:10:47.914 }, 00:10:47.914 { 00:10:47.914 "name": "pt4", 00:10:47.914 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:47.914 "is_configured": true, 00:10:47.914 "data_offset": 2048, 00:10:47.914 "data_size": 63488 00:10:47.914 } 00:10:47.914 ] 00:10:47.914 }' 00:10:47.914 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.914 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.487 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:48.487 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:48.487 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:48.487 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:48.487 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:48.487 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:48.487 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:48.487 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:48.487 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.487 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.487 [2024-11-19 12:02:51.575319] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.487 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.487 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:48.487 "name": "raid_bdev1", 00:10:48.487 "aliases": [ 00:10:48.487 "325c2ebf-631f-411f-b6eb-38db0904f34e" 
00:10:48.487 ], 00:10:48.487 "product_name": "Raid Volume", 00:10:48.487 "block_size": 512, 00:10:48.487 "num_blocks": 253952, 00:10:48.487 "uuid": "325c2ebf-631f-411f-b6eb-38db0904f34e", 00:10:48.487 "assigned_rate_limits": { 00:10:48.487 "rw_ios_per_sec": 0, 00:10:48.487 "rw_mbytes_per_sec": 0, 00:10:48.487 "r_mbytes_per_sec": 0, 00:10:48.487 "w_mbytes_per_sec": 0 00:10:48.487 }, 00:10:48.487 "claimed": false, 00:10:48.487 "zoned": false, 00:10:48.487 "supported_io_types": { 00:10:48.487 "read": true, 00:10:48.487 "write": true, 00:10:48.487 "unmap": true, 00:10:48.487 "flush": true, 00:10:48.487 "reset": true, 00:10:48.487 "nvme_admin": false, 00:10:48.487 "nvme_io": false, 00:10:48.487 "nvme_io_md": false, 00:10:48.487 "write_zeroes": true, 00:10:48.487 "zcopy": false, 00:10:48.487 "get_zone_info": false, 00:10:48.487 "zone_management": false, 00:10:48.487 "zone_append": false, 00:10:48.487 "compare": false, 00:10:48.487 "compare_and_write": false, 00:10:48.487 "abort": false, 00:10:48.487 "seek_hole": false, 00:10:48.487 "seek_data": false, 00:10:48.487 "copy": false, 00:10:48.487 "nvme_iov_md": false 00:10:48.487 }, 00:10:48.487 "memory_domains": [ 00:10:48.487 { 00:10:48.487 "dma_device_id": "system", 00:10:48.487 "dma_device_type": 1 00:10:48.487 }, 00:10:48.487 { 00:10:48.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.487 "dma_device_type": 2 00:10:48.487 }, 00:10:48.487 { 00:10:48.487 "dma_device_id": "system", 00:10:48.487 "dma_device_type": 1 00:10:48.487 }, 00:10:48.487 { 00:10:48.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.487 "dma_device_type": 2 00:10:48.487 }, 00:10:48.487 { 00:10:48.487 "dma_device_id": "system", 00:10:48.487 "dma_device_type": 1 00:10:48.487 }, 00:10:48.487 { 00:10:48.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.487 "dma_device_type": 2 00:10:48.487 }, 00:10:48.487 { 00:10:48.487 "dma_device_id": "system", 00:10:48.487 "dma_device_type": 1 00:10:48.487 }, 00:10:48.487 { 00:10:48.487 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:48.487 "dma_device_type": 2 00:10:48.487 } 00:10:48.487 ], 00:10:48.487 "driver_specific": { 00:10:48.487 "raid": { 00:10:48.487 "uuid": "325c2ebf-631f-411f-b6eb-38db0904f34e", 00:10:48.487 "strip_size_kb": 64, 00:10:48.487 "state": "online", 00:10:48.487 "raid_level": "raid0", 00:10:48.487 "superblock": true, 00:10:48.487 "num_base_bdevs": 4, 00:10:48.487 "num_base_bdevs_discovered": 4, 00:10:48.487 "num_base_bdevs_operational": 4, 00:10:48.487 "base_bdevs_list": [ 00:10:48.487 { 00:10:48.487 "name": "pt1", 00:10:48.487 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.487 "is_configured": true, 00:10:48.488 "data_offset": 2048, 00:10:48.488 "data_size": 63488 00:10:48.488 }, 00:10:48.488 { 00:10:48.488 "name": "pt2", 00:10:48.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.488 "is_configured": true, 00:10:48.488 "data_offset": 2048, 00:10:48.488 "data_size": 63488 00:10:48.488 }, 00:10:48.488 { 00:10:48.488 "name": "pt3", 00:10:48.488 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.488 "is_configured": true, 00:10:48.488 "data_offset": 2048, 00:10:48.488 "data_size": 63488 00:10:48.488 }, 00:10:48.488 { 00:10:48.488 "name": "pt4", 00:10:48.488 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:48.488 "is_configured": true, 00:10:48.488 "data_offset": 2048, 00:10:48.488 "data_size": 63488 00:10:48.488 } 00:10:48.488 ] 00:10:48.488 } 00:10:48.488 } 00:10:48.488 }' 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:48.488 pt2 00:10:48.488 pt3 00:10:48.488 pt4' 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.488 12:02:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:48.488 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:48.488 [2024-11-19 12:02:51.858724] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=325c2ebf-631f-411f-b6eb-38db0904f34e 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 325c2ebf-631f-411f-b6eb-38db0904f34e ']' 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.749 [2024-11-19 12:02:51.906343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:48.749 [2024-11-19 12:02:51.906371] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.749 [2024-11-19 12:02:51.906443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.749 [2024-11-19 12:02:51.906506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.749 [2024-11-19 12:02:51.906519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.749 12:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.749 [2024-11-19 12:02:52.066134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:48.749 [2024-11-19 12:02:52.068039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:48.749 [2024-11-19 12:02:52.068126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:48.749 [2024-11-19 12:02:52.068179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:48.749 [2024-11-19 12:02:52.068295] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:48.749 [2024-11-19 12:02:52.068344] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:48.749 [2024-11-19 12:02:52.068362] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:48.749 [2024-11-19 12:02:52.068380] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:48.749 [2024-11-19 12:02:52.068393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:48.749 [2024-11-19 12:02:52.068405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:10:48.749 request: 00:10:48.749 { 00:10:48.749 "name": "raid_bdev1", 00:10:48.749 "raid_level": "raid0", 00:10:48.749 "base_bdevs": [ 00:10:48.749 "malloc1", 00:10:48.749 "malloc2", 00:10:48.749 "malloc3", 00:10:48.749 "malloc4" 00:10:48.749 ], 00:10:48.749 "strip_size_kb": 64, 00:10:48.749 "superblock": false, 00:10:48.749 "method": "bdev_raid_create", 00:10:48.749 "req_id": 1 00:10:48.749 } 00:10:48.749 Got JSON-RPC error response 00:10:48.749 response: 00:10:48.749 { 00:10:48.749 "code": -17, 00:10:48.749 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:48.749 } 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:48.749 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.750 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.750 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.750 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:48.750 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.010 [2024-11-19 12:02:52.130011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:49.010 [2024-11-19 12:02:52.130108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.010 [2024-11-19 12:02:52.130138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:49.010 [2024-11-19 12:02:52.130167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.010 [2024-11-19 12:02:52.132288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.010 [2024-11-19 12:02:52.132361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:49.010 [2024-11-19 12:02:52.132448] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:49.010 [2024-11-19 12:02:52.132521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:49.010 pt1 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.010 "name": "raid_bdev1", 00:10:49.010 "uuid": "325c2ebf-631f-411f-b6eb-38db0904f34e", 00:10:49.010 "strip_size_kb": 64, 00:10:49.010 "state": "configuring", 00:10:49.010 "raid_level": "raid0", 00:10:49.010 "superblock": true, 00:10:49.010 "num_base_bdevs": 4, 00:10:49.010 "num_base_bdevs_discovered": 1, 00:10:49.010 "num_base_bdevs_operational": 4, 00:10:49.010 "base_bdevs_list": [ 00:10:49.010 { 00:10:49.010 "name": "pt1", 00:10:49.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.010 "is_configured": true, 00:10:49.010 "data_offset": 2048, 00:10:49.010 "data_size": 63488 00:10:49.010 }, 00:10:49.010 { 00:10:49.010 "name": null, 00:10:49.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.010 "is_configured": false, 00:10:49.010 "data_offset": 2048, 00:10:49.010 "data_size": 63488 00:10:49.010 }, 00:10:49.010 { 00:10:49.010 "name": null, 00:10:49.010 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:49.010 "is_configured": false, 00:10:49.010 "data_offset": 2048, 00:10:49.010 "data_size": 63488 00:10:49.010 }, 00:10:49.010 { 00:10:49.010 "name": null, 00:10:49.010 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.010 "is_configured": false, 00:10:49.010 "data_offset": 2048, 00:10:49.010 "data_size": 63488 00:10:49.010 } 00:10:49.010 ] 00:10:49.010 }' 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.010 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.271 [2024-11-19 12:02:52.597245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.271 [2024-11-19 12:02:52.597328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.271 [2024-11-19 12:02:52.597347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:49.271 [2024-11-19 12:02:52.597359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.271 [2024-11-19 12:02:52.597770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.271 [2024-11-19 12:02:52.597807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.271 [2024-11-19 12:02:52.597901] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:49.271 [2024-11-19 12:02:52.597925] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:49.271 pt2 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.271 [2024-11-19 12:02:52.609210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.271 12:02:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.271 12:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.532 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.532 "name": "raid_bdev1", 00:10:49.532 "uuid": "325c2ebf-631f-411f-b6eb-38db0904f34e", 00:10:49.532 "strip_size_kb": 64, 00:10:49.532 "state": "configuring", 00:10:49.532 "raid_level": "raid0", 00:10:49.532 "superblock": true, 00:10:49.532 "num_base_bdevs": 4, 00:10:49.532 "num_base_bdevs_discovered": 1, 00:10:49.532 "num_base_bdevs_operational": 4, 00:10:49.532 "base_bdevs_list": [ 00:10:49.532 { 00:10:49.532 "name": "pt1", 00:10:49.532 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.532 "is_configured": true, 00:10:49.532 "data_offset": 2048, 00:10:49.532 "data_size": 63488 00:10:49.532 }, 00:10:49.532 { 00:10:49.532 "name": null, 00:10:49.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.532 "is_configured": false, 00:10:49.532 "data_offset": 0, 00:10:49.532 "data_size": 63488 00:10:49.532 }, 00:10:49.532 { 00:10:49.532 "name": null, 00:10:49.532 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.532 "is_configured": false, 00:10:49.532 "data_offset": 2048, 00:10:49.532 "data_size": 63488 00:10:49.532 }, 00:10:49.532 { 00:10:49.532 "name": null, 00:10:49.532 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.532 "is_configured": false, 00:10:49.532 "data_offset": 2048, 00:10:49.532 "data_size": 63488 00:10:49.533 } 00:10:49.533 ] 00:10:49.533 }' 00:10:49.533 12:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.533 12:02:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.793 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.794 [2024-11-19 12:02:53.100385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.794 [2024-11-19 12:02:53.100557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.794 [2024-11-19 12:02:53.100582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:49.794 [2024-11-19 12:02:53.100592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.794 [2024-11-19 12:02:53.101041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.794 [2024-11-19 12:02:53.101059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.794 [2024-11-19 12:02:53.101140] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:49.794 [2024-11-19 12:02:53.101164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:49.794 pt2 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.794 [2024-11-19 12:02:53.108313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:49.794 [2024-11-19 12:02:53.108363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.794 [2024-11-19 12:02:53.108385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:49.794 [2024-11-19 12:02:53.108406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.794 [2024-11-19 12:02:53.108748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.794 [2024-11-19 12:02:53.108763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:49.794 [2024-11-19 12:02:53.108820] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:49.794 [2024-11-19 12:02:53.108836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:49.794 pt3 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.794 [2024-11-19 12:02:53.116283] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:49.794 [2024-11-19 12:02:53.116333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.794 [2024-11-19 12:02:53.116352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:49.794 [2024-11-19 12:02:53.116360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.794 [2024-11-19 12:02:53.116722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.794 [2024-11-19 12:02:53.116737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:49.794 [2024-11-19 12:02:53.116795] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:49.794 [2024-11-19 12:02:53.116811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:49.794 [2024-11-19 12:02:53.116970] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:49.794 [2024-11-19 12:02:53.116978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:49.794 [2024-11-19 12:02:53.117220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:49.794 [2024-11-19 12:02:53.117362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:49.794 [2024-11-19 12:02:53.117375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:49.794 [2024-11-19 12:02:53.117506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.794 pt4 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.794 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.055 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.055 "name": "raid_bdev1", 00:10:50.055 "uuid": "325c2ebf-631f-411f-b6eb-38db0904f34e", 00:10:50.055 "strip_size_kb": 64, 00:10:50.055 "state": "online", 00:10:50.055 "raid_level": "raid0", 00:10:50.055 
"superblock": true, 00:10:50.055 "num_base_bdevs": 4, 00:10:50.055 "num_base_bdevs_discovered": 4, 00:10:50.055 "num_base_bdevs_operational": 4, 00:10:50.055 "base_bdevs_list": [ 00:10:50.055 { 00:10:50.055 "name": "pt1", 00:10:50.055 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.055 "is_configured": true, 00:10:50.055 "data_offset": 2048, 00:10:50.055 "data_size": 63488 00:10:50.055 }, 00:10:50.055 { 00:10:50.055 "name": "pt2", 00:10:50.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.055 "is_configured": true, 00:10:50.055 "data_offset": 2048, 00:10:50.055 "data_size": 63488 00:10:50.055 }, 00:10:50.055 { 00:10:50.055 "name": "pt3", 00:10:50.055 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.055 "is_configured": true, 00:10:50.055 "data_offset": 2048, 00:10:50.055 "data_size": 63488 00:10:50.055 }, 00:10:50.055 { 00:10:50.055 "name": "pt4", 00:10:50.055 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.055 "is_configured": true, 00:10:50.055 "data_offset": 2048, 00:10:50.055 "data_size": 63488 00:10:50.055 } 00:10:50.055 ] 00:10:50.055 }' 00:10:50.055 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.055 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.316 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:50.316 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:50.316 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.316 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.316 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.316 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.316 12:02:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:50.316 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.316 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.316 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.316 [2024-11-19 12:02:53.599860] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.316 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.316 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.316 "name": "raid_bdev1", 00:10:50.316 "aliases": [ 00:10:50.316 "325c2ebf-631f-411f-b6eb-38db0904f34e" 00:10:50.316 ], 00:10:50.316 "product_name": "Raid Volume", 00:10:50.316 "block_size": 512, 00:10:50.316 "num_blocks": 253952, 00:10:50.316 "uuid": "325c2ebf-631f-411f-b6eb-38db0904f34e", 00:10:50.316 "assigned_rate_limits": { 00:10:50.316 "rw_ios_per_sec": 0, 00:10:50.316 "rw_mbytes_per_sec": 0, 00:10:50.316 "r_mbytes_per_sec": 0, 00:10:50.316 "w_mbytes_per_sec": 0 00:10:50.316 }, 00:10:50.316 "claimed": false, 00:10:50.316 "zoned": false, 00:10:50.316 "supported_io_types": { 00:10:50.316 "read": true, 00:10:50.316 "write": true, 00:10:50.316 "unmap": true, 00:10:50.316 "flush": true, 00:10:50.316 "reset": true, 00:10:50.316 "nvme_admin": false, 00:10:50.316 "nvme_io": false, 00:10:50.316 "nvme_io_md": false, 00:10:50.316 "write_zeroes": true, 00:10:50.316 "zcopy": false, 00:10:50.316 "get_zone_info": false, 00:10:50.316 "zone_management": false, 00:10:50.316 "zone_append": false, 00:10:50.316 "compare": false, 00:10:50.316 "compare_and_write": false, 00:10:50.316 "abort": false, 00:10:50.316 "seek_hole": false, 00:10:50.316 "seek_data": false, 00:10:50.316 "copy": false, 00:10:50.316 "nvme_iov_md": false 00:10:50.316 }, 00:10:50.316 
"memory_domains": [ 00:10:50.316 { 00:10:50.316 "dma_device_id": "system", 00:10:50.316 "dma_device_type": 1 00:10:50.316 }, 00:10:50.316 { 00:10:50.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.316 "dma_device_type": 2 00:10:50.316 }, 00:10:50.316 { 00:10:50.316 "dma_device_id": "system", 00:10:50.316 "dma_device_type": 1 00:10:50.316 }, 00:10:50.316 { 00:10:50.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.316 "dma_device_type": 2 00:10:50.316 }, 00:10:50.316 { 00:10:50.316 "dma_device_id": "system", 00:10:50.316 "dma_device_type": 1 00:10:50.316 }, 00:10:50.316 { 00:10:50.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.316 "dma_device_type": 2 00:10:50.316 }, 00:10:50.316 { 00:10:50.316 "dma_device_id": "system", 00:10:50.316 "dma_device_type": 1 00:10:50.316 }, 00:10:50.316 { 00:10:50.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.316 "dma_device_type": 2 00:10:50.316 } 00:10:50.316 ], 00:10:50.316 "driver_specific": { 00:10:50.316 "raid": { 00:10:50.316 "uuid": "325c2ebf-631f-411f-b6eb-38db0904f34e", 00:10:50.316 "strip_size_kb": 64, 00:10:50.316 "state": "online", 00:10:50.316 "raid_level": "raid0", 00:10:50.316 "superblock": true, 00:10:50.316 "num_base_bdevs": 4, 00:10:50.316 "num_base_bdevs_discovered": 4, 00:10:50.316 "num_base_bdevs_operational": 4, 00:10:50.316 "base_bdevs_list": [ 00:10:50.316 { 00:10:50.316 "name": "pt1", 00:10:50.316 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.316 "is_configured": true, 00:10:50.316 "data_offset": 2048, 00:10:50.316 "data_size": 63488 00:10:50.316 }, 00:10:50.316 { 00:10:50.316 "name": "pt2", 00:10:50.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.316 "is_configured": true, 00:10:50.316 "data_offset": 2048, 00:10:50.316 "data_size": 63488 00:10:50.316 }, 00:10:50.316 { 00:10:50.316 "name": "pt3", 00:10:50.316 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.316 "is_configured": true, 00:10:50.316 "data_offset": 2048, 00:10:50.316 "data_size": 63488 
00:10:50.316 }, 00:10:50.316 { 00:10:50.316 "name": "pt4", 00:10:50.316 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.316 "is_configured": true, 00:10:50.316 "data_offset": 2048, 00:10:50.316 "data_size": 63488 00:10:50.316 } 00:10:50.316 ] 00:10:50.316 } 00:10:50.316 } 00:10:50.316 }' 00:10:50.316 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.316 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:50.316 pt2 00:10:50.316 pt3 00:10:50.316 pt4' 00:10:50.316 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.577 
12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:50.577 [2024-11-19 12:02:53.919337] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.577 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.837 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 325c2ebf-631f-411f-b6eb-38db0904f34e '!=' 325c2ebf-631f-411f-b6eb-38db0904f34e ']' 00:10:50.837 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:50.837 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.837 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:50.837 12:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70741 00:10:50.837 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70741 ']' 00:10:50.838 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70741 00:10:50.838 12:02:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:50.838 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.838 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70741 00:10:50.838 killing process with pid 70741 00:10:50.838 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.838 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.838 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70741' 00:10:50.838 12:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70741 00:10:50.838 [2024-11-19 12:02:54.001327] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.838 [2024-11-19 12:02:54.001412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.838 [2024-11-19 12:02:54.001478] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.838 [2024-11-19 12:02:54.001487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:50.838 12:02:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70741 00:10:51.097 [2024-11-19 12:02:54.390157] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.479 12:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:52.479 00:10:52.479 real 0m5.509s 00:10:52.479 user 0m7.937s 00:10:52.479 sys 0m0.908s 00:10:52.479 12:02:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.479 ************************************ 00:10:52.479 END TEST raid_superblock_test 00:10:52.479 ************************************ 00:10:52.479 12:02:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.479 12:02:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:52.479 12:02:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:52.479 12:02:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.479 12:02:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.479 ************************************ 00:10:52.479 START TEST raid_read_error_test 00:10:52.479 ************************************ 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:52.479 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:52.480 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:52.480 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:52.480 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pVlNQ581kT 00:10:52.480 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71001 00:10:52.480 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:52.480 12:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71001 00:10:52.480 12:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71001 ']' 00:10:52.480 12:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.480 12:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.480 12:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.480 12:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.480 12:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.480 [2024-11-19 12:02:55.635659] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:52.480 [2024-11-19 12:02:55.635914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71001 ] 00:10:52.480 [2024-11-19 12:02:55.819382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.739 [2024-11-19 12:02:55.933790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.998 [2024-11-19 12:02:56.127942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.998 [2024-11-19 12:02:56.127998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.258 BaseBdev1_malloc 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.258 true 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.258 [2024-11-19 12:02:56.538294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:53.258 [2024-11-19 12:02:56.538366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.258 [2024-11-19 12:02:56.538385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:53.258 [2024-11-19 12:02:56.538396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.258 [2024-11-19 12:02:56.540448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.258 [2024-11-19 12:02:56.540589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:53.258 BaseBdev1 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.258 BaseBdev2_malloc 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.258 true 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.258 [2024-11-19 12:02:56.596981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:53.258 [2024-11-19 12:02:56.597148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.258 [2024-11-19 12:02:56.597173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:53.258 [2024-11-19 12:02:56.597185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.258 [2024-11-19 12:02:56.599360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.258 [2024-11-19 12:02:56.599402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:53.258 BaseBdev2 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.258 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.518 BaseBdev3_malloc 00:10:53.518 12:02:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.518 true 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.518 [2024-11-19 12:02:56.673494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:53.518 [2024-11-19 12:02:56.673556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.518 [2024-11-19 12:02:56.673573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:53.518 [2024-11-19 12:02:56.673582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.518 [2024-11-19 12:02:56.675560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.518 [2024-11-19 12:02:56.675602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:53.518 BaseBdev3 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.518 BaseBdev4_malloc 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.518 true 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.518 [2024-11-19 12:02:56.738438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:53.518 [2024-11-19 12:02:56.738507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.518 [2024-11-19 12:02:56.738524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:53.518 [2024-11-19 12:02:56.738534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.518 [2024-11-19 12:02:56.740594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.518 [2024-11-19 12:02:56.740634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:53.518 BaseBdev4 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- 
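The loop traced above builds a three-layer stack for each base bdev: a malloc bdev, an error-injection bdev on top of it (bdev_error names it with an `EE_` prefix), and a passthru bdev that gives it the final `BaseBdevN` name. A small sketch that just reconstructs those RPC command strings from the log (nothing here talks to a running SPDK target):

```python
# Per-bdev setup performed by the loop at bdev_raid.sh@814-817, as seen in
# the trace: malloc -> error-injection (EE_ prefix) -> passthru rename.
def setup_cmds(name):
    return [
        f"bdev_malloc_create 32 512 -b {name}_malloc",
        f"bdev_error_create {name}_malloc",
        f"bdev_passthru_create -b EE_{name}_malloc -p {name}",
    ]

cmds = [c for i in range(1, 5) for c in setup_cmds(f"BaseBdev{i}")]
```

The error layer is what later lets the test call `bdev_error_inject_error EE_BaseBdev1_malloc read failure` to force I/O errors on one leg of the raid0 array.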
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.518 [2024-11-19 12:02:56.750473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.518 [2024-11-19 12:02:56.752295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.518 [2024-11-19 12:02:56.752371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.518 [2024-11-19 12:02:56.752433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:53.518 [2024-11-19 12:02:56.752652] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:53.518 [2024-11-19 12:02:56.752667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:53.518 [2024-11-19 12:02:56.752894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:53.518 [2024-11-19 12:02:56.753046] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:53.518 [2024-11-19 12:02:56.753057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:53.518 [2024-11-19 12:02:56.753203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:53.518 12:02:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.518 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.518 "name": "raid_bdev1", 00:10:53.518 "uuid": "26ddb090-2ce2-41cc-8323-b301259c4a08", 00:10:53.518 "strip_size_kb": 64, 00:10:53.519 "state": "online", 00:10:53.519 "raid_level": "raid0", 00:10:53.519 "superblock": true, 00:10:53.519 "num_base_bdevs": 4, 00:10:53.519 "num_base_bdevs_discovered": 4, 00:10:53.519 "num_base_bdevs_operational": 4, 00:10:53.519 "base_bdevs_list": [ 00:10:53.519 
{ 00:10:53.519 "name": "BaseBdev1", 00:10:53.519 "uuid": "6f54bf25-ae11-5431-abb7-97bd1fc58991", 00:10:53.519 "is_configured": true, 00:10:53.519 "data_offset": 2048, 00:10:53.519 "data_size": 63488 00:10:53.519 }, 00:10:53.519 { 00:10:53.519 "name": "BaseBdev2", 00:10:53.519 "uuid": "aca46983-9239-55dd-9555-28d426c667f7", 00:10:53.519 "is_configured": true, 00:10:53.519 "data_offset": 2048, 00:10:53.519 "data_size": 63488 00:10:53.519 }, 00:10:53.519 { 00:10:53.519 "name": "BaseBdev3", 00:10:53.519 "uuid": "27f2fb39-b73f-5fdb-9927-38f523f0664c", 00:10:53.519 "is_configured": true, 00:10:53.519 "data_offset": 2048, 00:10:53.519 "data_size": 63488 00:10:53.519 }, 00:10:53.519 { 00:10:53.519 "name": "BaseBdev4", 00:10:53.519 "uuid": "bf28a112-de0e-53b4-a8b2-b3ca157ae761", 00:10:53.519 "is_configured": true, 00:10:53.519 "data_offset": 2048, 00:10:53.519 "data_size": 63488 00:10:53.519 } 00:10:53.519 ] 00:10:53.519 }' 00:10:53.519 12:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.519 12:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.088 12:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:54.088 12:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:54.088 [2024-11-19 12:02:57.259133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.025 12:02:58 
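The `verify_raid_bdev_state` helper traced here fetches `bdev_raid_get_bdevs all`, selects the entry named `raid_bdev1` with jq, and checks its state fields against the expected values. A Python sketch of the same checks, using a trimmed copy of the JSON shown in the log:

```python
import json

# Trimmed copy of the `bdev_raid_get_bdevs all` output from the trace;
# the per-base-bdev entries are omitted for brevity.
bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid0",
    "strip_size_kb": 64,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "num_base_bdevs_operational": 4
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# The assertions verify_raid_bdev_state makes (expected values from the log):
assert info["state"] == "online"
assert info["raid_level"] == "raid0"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_discovered"] == 4
```

The same verification runs twice in the trace: once after `bdev_raid_create`, and again after the read error is injected, since raid0 has no redundancy and all four base bdevs must stay discovered.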
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.025 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.026 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.026 12:02:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.026 "name": "raid_bdev1", 00:10:55.026 "uuid": "26ddb090-2ce2-41cc-8323-b301259c4a08", 00:10:55.026 "strip_size_kb": 64, 00:10:55.026 "state": "online", 00:10:55.026 "raid_level": "raid0", 00:10:55.026 "superblock": true, 00:10:55.026 "num_base_bdevs": 4, 00:10:55.026 "num_base_bdevs_discovered": 4, 00:10:55.026 "num_base_bdevs_operational": 4, 00:10:55.026 "base_bdevs_list": [ 00:10:55.026 { 00:10:55.026 "name": "BaseBdev1", 00:10:55.026 "uuid": "6f54bf25-ae11-5431-abb7-97bd1fc58991", 00:10:55.026 "is_configured": true, 00:10:55.026 "data_offset": 2048, 00:10:55.026 "data_size": 63488 00:10:55.026 }, 00:10:55.026 { 00:10:55.026 "name": "BaseBdev2", 00:10:55.026 "uuid": "aca46983-9239-55dd-9555-28d426c667f7", 00:10:55.026 "is_configured": true, 00:10:55.026 "data_offset": 2048, 00:10:55.026 "data_size": 63488 00:10:55.026 }, 00:10:55.026 { 00:10:55.026 "name": "BaseBdev3", 00:10:55.026 "uuid": "27f2fb39-b73f-5fdb-9927-38f523f0664c", 00:10:55.026 "is_configured": true, 00:10:55.026 "data_offset": 2048, 00:10:55.026 "data_size": 63488 00:10:55.026 }, 00:10:55.026 { 00:10:55.026 "name": "BaseBdev4", 00:10:55.026 "uuid": "bf28a112-de0e-53b4-a8b2-b3ca157ae761", 00:10:55.026 "is_configured": true, 00:10:55.026 "data_offset": 2048, 00:10:55.026 "data_size": 63488 00:10:55.026 } 00:10:55.026 ] 00:10:55.026 }' 00:10:55.026 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.026 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.285 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:55.285 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.285 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.285 [2024-11-19 12:02:58.630937] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.285 [2024-11-19 12:02:58.631176] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.285 [2024-11-19 12:02:58.633891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.285 [2024-11-19 12:02:58.634000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.285 [2024-11-19 12:02:58.634069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.285 [2024-11-19 12:02:58.634117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:55.285 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.285 { 00:10:55.285 "results": [ 00:10:55.285 { 00:10:55.285 "job": "raid_bdev1", 00:10:55.285 "core_mask": "0x1", 00:10:55.285 "workload": "randrw", 00:10:55.285 "percentage": 50, 00:10:55.285 "status": "finished", 00:10:55.285 "queue_depth": 1, 00:10:55.285 "io_size": 131072, 00:10:55.285 "runtime": 1.372706, 00:10:55.285 "iops": 16116.342465174626, 00:10:55.285 "mibps": 2014.5428081468283, 00:10:55.285 "io_failed": 1, 00:10:55.285 "io_timeout": 0, 00:10:55.285 "avg_latency_us": 86.50997892782168, 00:10:55.285 "min_latency_us": 24.929257641921396, 00:10:55.285 "max_latency_us": 1366.5257641921398 00:10:55.285 } 00:10:55.285 ], 00:10:55.285 "core_count": 1 00:10:55.285 } 00:10:55.285 12:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71001 00:10:55.285 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71001 ']' 00:10:55.285 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71001 00:10:55.285 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:55.285 12:02:58 bdev_raid.raid_read_error_test 
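The bdevperf results JSON above is internally consistent: with 128 KiB I/Os, throughput in MiB/s is iops/8, and the `fail_per_s` value the `grep ... | awk '{print $6}'` pipeline later extracts is just `io_failed / runtime`. Checking the arithmetic with the figures from the log:

```python
# Figures taken verbatim from the bdevperf "results" JSON in the trace.
iops = 16116.342465174626
io_size = 131072          # 128 KiB per I/O (bdevperf -o 128k)
io_failed = 1
runtime = 1.372706        # seconds

mibps = iops * io_size / (1024 * 1024)
fail_per_s = io_failed / runtime

print(f"{mibps:.4f}")      # 2014.5428, matching "mibps" in the JSON
print(f"{fail_per_s:.2f}") # 0.73, the value compared against \0\.\0\0 below
```

That nonzero failure rate is exactly what the test wants: one injected read error on `EE_BaseBdev1_malloc` must surface as a failed I/O, since raid0 cannot recover it, hence the final `[[ 0.73 != \0\.\0\0 ]]` check.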
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.285 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71001 00:10:55.544 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.544 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.544 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71001' 00:10:55.544 killing process with pid 71001 00:10:55.544 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71001 00:10:55.544 [2024-11-19 12:02:58.682021] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.544 12:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71001 00:10:55.803 [2024-11-19 12:02:59.004527] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.180 12:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pVlNQ581kT 00:10:57.180 12:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:57.180 12:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:57.180 12:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:57.180 12:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:57.180 12:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.180 12:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:57.180 12:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:57.180 00:10:57.180 real 0m4.636s 00:10:57.180 user 0m5.463s 00:10:57.180 sys 0m0.586s 00:10:57.180 12:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:57.180 ************************************ 00:10:57.180 END TEST raid_read_error_test 00:10:57.180 ************************************ 00:10:57.180 12:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.180 12:03:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:57.180 12:03:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:57.180 12:03:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.180 12:03:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.180 ************************************ 00:10:57.180 START TEST raid_write_error_test 00:10:57.181 ************************************ 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BINZQBnctQ 00:10:57.181 12:03:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71147 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71147 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71147 ']' 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.181 12:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.181 [2024-11-19 12:03:00.344779] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:57.181 [2024-11-19 12:03:00.344991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71147 ] 00:10:57.181 [2024-11-19 12:03:00.523285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.440 [2024-11-19 12:03:00.641559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.700 [2024-11-19 12:03:00.842045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.700 [2024-11-19 12:03:00.842102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.960 BaseBdev1_malloc 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.960 true 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.960 [2024-11-19 12:03:01.249334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:57.960 [2024-11-19 12:03:01.249401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.960 [2024-11-19 12:03:01.249421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:57.960 [2024-11-19 12:03:01.249432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.960 [2024-11-19 12:03:01.251435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.960 [2024-11-19 12:03:01.251564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:57.960 BaseBdev1 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.960 BaseBdev2_malloc 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:57.960 12:03:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.960 true 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.960 [2024-11-19 12:03:01.312394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:57.960 [2024-11-19 12:03:01.312457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.960 [2024-11-19 12:03:01.312473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:57.960 [2024-11-19 12:03:01.312483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.960 [2024-11-19 12:03:01.314443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.960 [2024-11-19 12:03:01.314571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:57.960 BaseBdev2 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.960 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:58.221 BaseBdev3_malloc 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.221 true 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.221 [2024-11-19 12:03:01.389760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:58.221 [2024-11-19 12:03:01.389820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.221 [2024-11-19 12:03:01.389837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:58.221 [2024-11-19 12:03:01.389847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.221 [2024-11-19 12:03:01.391839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.221 [2024-11-19 12:03:01.391960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:58.221 BaseBdev3 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.221 BaseBdev4_malloc 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.221 true 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.221 [2024-11-19 12:03:01.453907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:58.221 [2024-11-19 12:03:01.453971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.221 [2024-11-19 12:03:01.453988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:58.221 [2024-11-19 12:03:01.454012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.221 [2024-11-19 12:03:01.456081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.221 [2024-11-19 12:03:01.456120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:58.221 BaseBdev4 
00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.221 [2024-11-19 12:03:01.465946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.221 [2024-11-19 12:03:01.467816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.221 [2024-11-19 12:03:01.467887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.221 [2024-11-19 12:03:01.467948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:58.221 [2024-11-19 12:03:01.468174] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:58.221 [2024-11-19 12:03:01.468194] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:58.221 [2024-11-19 12:03:01.468419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:58.221 [2024-11-19 12:03:01.468587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:58.221 [2024-11-19 12:03:01.468598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:58.221 [2024-11-19 12:03:01.468754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.221 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.221 "name": "raid_bdev1", 00:10:58.221 "uuid": "cd1f2b82-a034-49ce-9459-1a821ad8565c", 00:10:58.221 "strip_size_kb": 64, 00:10:58.221 "state": "online", 00:10:58.221 "raid_level": "raid0", 00:10:58.221 "superblock": true, 00:10:58.221 "num_base_bdevs": 4, 00:10:58.221 "num_base_bdevs_discovered": 4, 00:10:58.221 
"num_base_bdevs_operational": 4, 00:10:58.221 "base_bdevs_list": [ 00:10:58.221 { 00:10:58.221 "name": "BaseBdev1", 00:10:58.221 "uuid": "c15a2df7-52ff-52e1-bf89-b96e73957330", 00:10:58.221 "is_configured": true, 00:10:58.221 "data_offset": 2048, 00:10:58.221 "data_size": 63488 00:10:58.221 }, 00:10:58.221 { 00:10:58.221 "name": "BaseBdev2", 00:10:58.221 "uuid": "86a679a7-abc0-58c3-9316-1d55f69a2bfe", 00:10:58.221 "is_configured": true, 00:10:58.221 "data_offset": 2048, 00:10:58.221 "data_size": 63488 00:10:58.221 }, 00:10:58.221 { 00:10:58.221 "name": "BaseBdev3", 00:10:58.221 "uuid": "946c8035-6696-52bc-a16e-2f1fc9b7210a", 00:10:58.221 "is_configured": true, 00:10:58.221 "data_offset": 2048, 00:10:58.222 "data_size": 63488 00:10:58.222 }, 00:10:58.222 { 00:10:58.222 "name": "BaseBdev4", 00:10:58.222 "uuid": "e66797ab-96e4-567b-b19e-965b37a209f0", 00:10:58.222 "is_configured": true, 00:10:58.222 "data_offset": 2048, 00:10:58.222 "data_size": 63488 00:10:58.222 } 00:10:58.222 ] 00:10:58.222 }' 00:10:58.222 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.222 12:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.790 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:58.790 12:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:58.790 [2024-11-19 12:03:01.966459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:59.728 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.729 "name": "raid_bdev1", 00:10:59.729 "uuid": "cd1f2b82-a034-49ce-9459-1a821ad8565c", 00:10:59.729 "strip_size_kb": 64, 00:10:59.729 "state": "online", 00:10:59.729 "raid_level": "raid0", 00:10:59.729 "superblock": true, 00:10:59.729 "num_base_bdevs": 4, 00:10:59.729 "num_base_bdevs_discovered": 4, 00:10:59.729 "num_base_bdevs_operational": 4, 00:10:59.729 "base_bdevs_list": [ 00:10:59.729 { 00:10:59.729 "name": "BaseBdev1", 00:10:59.729 "uuid": "c15a2df7-52ff-52e1-bf89-b96e73957330", 00:10:59.729 "is_configured": true, 00:10:59.729 "data_offset": 2048, 00:10:59.729 "data_size": 63488 00:10:59.729 }, 00:10:59.729 { 00:10:59.729 "name": "BaseBdev2", 00:10:59.729 "uuid": "86a679a7-abc0-58c3-9316-1d55f69a2bfe", 00:10:59.729 "is_configured": true, 00:10:59.729 "data_offset": 2048, 00:10:59.729 "data_size": 63488 00:10:59.729 }, 00:10:59.729 { 00:10:59.729 "name": "BaseBdev3", 00:10:59.729 "uuid": "946c8035-6696-52bc-a16e-2f1fc9b7210a", 00:10:59.729 "is_configured": true, 00:10:59.729 "data_offset": 2048, 00:10:59.729 "data_size": 63488 00:10:59.729 }, 00:10:59.729 { 00:10:59.729 "name": "BaseBdev4", 00:10:59.729 "uuid": "e66797ab-96e4-567b-b19e-965b37a209f0", 00:10:59.729 "is_configured": true, 00:10:59.729 "data_offset": 2048, 00:10:59.729 "data_size": 63488 00:10:59.729 } 00:10:59.729 ] 00:10:59.729 }' 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.729 12:03:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.988 12:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:59.988 12:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.988 12:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:59.988 [2024-11-19 12:03:03.328775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.988 [2024-11-19 12:03:03.328912] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.988 [2024-11-19 12:03:03.331477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.988 [2024-11-19 12:03:03.331578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.988 [2024-11-19 12:03:03.331639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.988 [2024-11-19 12:03:03.331685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:59.988 { 00:10:59.988 "results": [ 00:10:59.988 { 00:10:59.988 "job": "raid_bdev1", 00:10:59.988 "core_mask": "0x1", 00:10:59.988 "workload": "randrw", 00:10:59.988 "percentage": 50, 00:10:59.988 "status": "finished", 00:10:59.988 "queue_depth": 1, 00:10:59.988 "io_size": 131072, 00:10:59.988 "runtime": 1.363229, 00:10:59.988 "iops": 16102.210266947079, 00:10:59.988 "mibps": 2012.7762833683848, 00:10:59.988 "io_failed": 1, 00:10:59.988 "io_timeout": 0, 00:10:59.988 "avg_latency_us": 86.39534546195273, 00:10:59.988 "min_latency_us": 25.823580786026202, 00:10:59.988 "max_latency_us": 1416.6078602620087 00:10:59.988 } 00:10:59.988 ], 00:10:59.988 "core_count": 1 00:10:59.988 } 00:10:59.988 12:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.988 12:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71147 00:10:59.988 12:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71147 ']' 00:10:59.988 12:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71147 00:10:59.988 12:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:10:59.988 12:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.988 12:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71147 00:11:00.248 12:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.248 12:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.248 12:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71147' 00:11:00.248 killing process with pid 71147 00:11:00.248 12:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71147 00:11:00.248 [2024-11-19 12:03:03.375694] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.248 12:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71147 00:11:00.508 [2024-11-19 12:03:03.696671] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.913 12:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BINZQBnctQ 00:11:01.913 12:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:01.913 12:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:01.913 12:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:01.913 12:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:01.913 12:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.913 12:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.913 ************************************ 00:11:01.913 END TEST raid_write_error_test 00:11:01.913 ************************************ 00:11:01.913 12:03:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:01.913 00:11:01.913 real 0m4.616s 00:11:01.913 user 0m5.428s 00:11:01.913 sys 0m0.591s 00:11:01.913 12:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.913 12:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.913 12:03:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:01.913 12:03:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:01.913 12:03:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:01.913 12:03:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.913 12:03:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.913 ************************************ 00:11:01.913 START TEST raid_state_function_test 00:11:01.913 ************************************ 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.913 12:03:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.913 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:01.914 Process raid pid: 71285 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:01.914 
12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71285 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71285' 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71285 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71285 ']' 00:11:01.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.914 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:01.914 [2024-11-19 12:03:05.018609] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:11:01.914 [2024-11-19 12:03:05.018823] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.914 [2024-11-19 12:03:05.178382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.174 [2024-11-19 12:03:05.295226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.174 [2024-11-19 12:03:05.504033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.174 [2024-11-19 12:03:05.504170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.743 [2024-11-19 12:03:05.863889] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.743 [2024-11-19 12:03:05.863954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.743 [2024-11-19 12:03:05.863967] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.743 [2024-11-19 12:03:05.863977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.743 [2024-11-19 12:03:05.863984] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:02.743 [2024-11-19 12:03:05.864003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.743 [2024-11-19 12:03:05.864011] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:02.743 [2024-11-19 12:03:05.864020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.743 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.743 "name": "Existed_Raid", 00:11:02.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.743 "strip_size_kb": 64, 00:11:02.743 "state": "configuring", 00:11:02.743 "raid_level": "concat", 00:11:02.743 "superblock": false, 00:11:02.743 "num_base_bdevs": 4, 00:11:02.743 "num_base_bdevs_discovered": 0, 00:11:02.743 "num_base_bdevs_operational": 4, 00:11:02.743 "base_bdevs_list": [ 00:11:02.743 { 00:11:02.743 "name": "BaseBdev1", 00:11:02.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.743 "is_configured": false, 00:11:02.743 "data_offset": 0, 00:11:02.743 "data_size": 0 00:11:02.743 }, 00:11:02.743 { 00:11:02.743 "name": "BaseBdev2", 00:11:02.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.743 "is_configured": false, 00:11:02.743 "data_offset": 0, 00:11:02.743 "data_size": 0 00:11:02.743 }, 00:11:02.743 { 00:11:02.743 "name": "BaseBdev3", 00:11:02.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.743 "is_configured": false, 00:11:02.743 "data_offset": 0, 00:11:02.743 "data_size": 0 00:11:02.743 }, 00:11:02.743 { 00:11:02.744 "name": "BaseBdev4", 00:11:02.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.744 "is_configured": false, 00:11:02.744 "data_offset": 0, 00:11:02.744 "data_size": 0 00:11:02.744 } 00:11:02.744 ] 00:11:02.744 }' 00:11:02.744 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.744 12:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.002 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:03.002 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.002 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.002 [2024-11-19 12:03:06.311268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.002 [2024-11-19 12:03:06.311376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:03.002 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.002 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.002 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.002 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.002 [2024-11-19 12:03:06.323090] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:03.002 [2024-11-19 12:03:06.323178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:03.002 [2024-11-19 12:03:06.323206] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.002 [2024-11-19 12:03:06.323228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.002 [2024-11-19 12:03:06.323263] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.002 [2024-11-19 12:03:06.323304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.002 [2024-11-19 12:03:06.323323] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:03.002 [2024-11-19 12:03:06.323356] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.002 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.002 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:03.002 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.002 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.002 [2024-11-19 12:03:06.367779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.002 BaseBdev1 00:11:03.003 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.003 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:03.003 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:03.003 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.003 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:03.003 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.003 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.003 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.003 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.003 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.260 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.260 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:03.260 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.260 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.260 [ 00:11:03.260 { 00:11:03.260 "name": "BaseBdev1", 00:11:03.260 "aliases": [ 00:11:03.260 "c1eb40dc-f932-4e65-972d-ec73ae19480a" 00:11:03.260 ], 00:11:03.260 "product_name": "Malloc disk", 00:11:03.260 "block_size": 512, 00:11:03.260 "num_blocks": 65536, 00:11:03.260 "uuid": "c1eb40dc-f932-4e65-972d-ec73ae19480a", 00:11:03.260 "assigned_rate_limits": { 00:11:03.260 "rw_ios_per_sec": 0, 00:11:03.260 "rw_mbytes_per_sec": 0, 00:11:03.260 "r_mbytes_per_sec": 0, 00:11:03.260 "w_mbytes_per_sec": 0 00:11:03.260 }, 00:11:03.260 "claimed": true, 00:11:03.260 "claim_type": "exclusive_write", 00:11:03.260 "zoned": false, 00:11:03.260 "supported_io_types": { 00:11:03.260 "read": true, 00:11:03.260 "write": true, 00:11:03.260 "unmap": true, 00:11:03.260 "flush": true, 00:11:03.260 "reset": true, 00:11:03.260 "nvme_admin": false, 00:11:03.260 "nvme_io": false, 00:11:03.260 "nvme_io_md": false, 00:11:03.260 "write_zeroes": true, 00:11:03.260 "zcopy": true, 00:11:03.260 "get_zone_info": false, 00:11:03.260 "zone_management": false, 00:11:03.260 "zone_append": false, 00:11:03.260 "compare": false, 00:11:03.260 "compare_and_write": false, 00:11:03.260 "abort": true, 00:11:03.260 "seek_hole": false, 00:11:03.260 "seek_data": false, 00:11:03.260 "copy": true, 00:11:03.260 "nvme_iov_md": false 00:11:03.260 }, 00:11:03.260 "memory_domains": [ 00:11:03.260 { 00:11:03.260 "dma_device_id": "system", 00:11:03.260 "dma_device_type": 1 00:11:03.260 }, 00:11:03.260 { 00:11:03.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.260 "dma_device_type": 2 00:11:03.260 } 00:11:03.260 ], 00:11:03.260 "driver_specific": {} 00:11:03.260 } 00:11:03.260 ] 00:11:03.260 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:03.260 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:03.260 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:03.260 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.260 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.261 "name": "Existed_Raid", 
00:11:03.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.261 "strip_size_kb": 64, 00:11:03.261 "state": "configuring", 00:11:03.261 "raid_level": "concat", 00:11:03.261 "superblock": false, 00:11:03.261 "num_base_bdevs": 4, 00:11:03.261 "num_base_bdevs_discovered": 1, 00:11:03.261 "num_base_bdevs_operational": 4, 00:11:03.261 "base_bdevs_list": [ 00:11:03.261 { 00:11:03.261 "name": "BaseBdev1", 00:11:03.261 "uuid": "c1eb40dc-f932-4e65-972d-ec73ae19480a", 00:11:03.261 "is_configured": true, 00:11:03.261 "data_offset": 0, 00:11:03.261 "data_size": 65536 00:11:03.261 }, 00:11:03.261 { 00:11:03.261 "name": "BaseBdev2", 00:11:03.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.261 "is_configured": false, 00:11:03.261 "data_offset": 0, 00:11:03.261 "data_size": 0 00:11:03.261 }, 00:11:03.261 { 00:11:03.261 "name": "BaseBdev3", 00:11:03.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.261 "is_configured": false, 00:11:03.261 "data_offset": 0, 00:11:03.261 "data_size": 0 00:11:03.261 }, 00:11:03.261 { 00:11:03.261 "name": "BaseBdev4", 00:11:03.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.261 "is_configured": false, 00:11:03.261 "data_offset": 0, 00:11:03.261 "data_size": 0 00:11:03.261 } 00:11:03.261 ] 00:11:03.261 }' 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.261 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.518 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.518 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.518 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.518 [2024-11-19 12:03:06.874963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.518 [2024-11-19 12:03:06.875122] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:03.518 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.518 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.518 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.518 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.518 [2024-11-19 12:03:06.886986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.518 [2024-11-19 12:03:06.888855] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.518 [2024-11-19 12:03:06.888956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.518 [2024-11-19 12:03:06.888985] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.518 [2024-11-19 12:03:06.889019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.518 [2024-11-19 12:03:06.889039] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:03.518 [2024-11-19 12:03:06.889060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.518 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.518 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:03.775 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.775 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:03.775 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.775 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.775 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.775 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.775 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.775 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.775 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.775 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.775 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.776 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.776 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.776 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.776 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.776 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.776 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.776 "name": "Existed_Raid", 00:11:03.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.776 "strip_size_kb": 64, 00:11:03.776 "state": "configuring", 00:11:03.776 "raid_level": "concat", 00:11:03.776 "superblock": false, 00:11:03.776 "num_base_bdevs": 4, 00:11:03.776 
"num_base_bdevs_discovered": 1, 00:11:03.776 "num_base_bdevs_operational": 4, 00:11:03.776 "base_bdevs_list": [ 00:11:03.776 { 00:11:03.776 "name": "BaseBdev1", 00:11:03.776 "uuid": "c1eb40dc-f932-4e65-972d-ec73ae19480a", 00:11:03.776 "is_configured": true, 00:11:03.776 "data_offset": 0, 00:11:03.776 "data_size": 65536 00:11:03.776 }, 00:11:03.776 { 00:11:03.776 "name": "BaseBdev2", 00:11:03.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.776 "is_configured": false, 00:11:03.776 "data_offset": 0, 00:11:03.776 "data_size": 0 00:11:03.776 }, 00:11:03.776 { 00:11:03.776 "name": "BaseBdev3", 00:11:03.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.776 "is_configured": false, 00:11:03.776 "data_offset": 0, 00:11:03.776 "data_size": 0 00:11:03.776 }, 00:11:03.776 { 00:11:03.776 "name": "BaseBdev4", 00:11:03.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.776 "is_configured": false, 00:11:03.776 "data_offset": 0, 00:11:03.776 "data_size": 0 00:11:03.776 } 00:11:03.776 ] 00:11:03.776 }' 00:11:03.776 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.776 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.034 [2024-11-19 12:03:07.388750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.034 BaseBdev2 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:04.034 12:03:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.034 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.293 [ 00:11:04.293 { 00:11:04.293 "name": "BaseBdev2", 00:11:04.293 "aliases": [ 00:11:04.293 "cf0fd44d-848e-47ce-b7c1-0b69ccf69f2e" 00:11:04.293 ], 00:11:04.293 "product_name": "Malloc disk", 00:11:04.293 "block_size": 512, 00:11:04.293 "num_blocks": 65536, 00:11:04.293 "uuid": "cf0fd44d-848e-47ce-b7c1-0b69ccf69f2e", 00:11:04.293 "assigned_rate_limits": { 00:11:04.293 "rw_ios_per_sec": 0, 00:11:04.293 "rw_mbytes_per_sec": 0, 00:11:04.293 "r_mbytes_per_sec": 0, 00:11:04.293 "w_mbytes_per_sec": 0 00:11:04.293 }, 00:11:04.293 "claimed": true, 00:11:04.293 "claim_type": "exclusive_write", 00:11:04.293 "zoned": false, 00:11:04.293 "supported_io_types": { 
00:11:04.293 "read": true, 00:11:04.293 "write": true, 00:11:04.293 "unmap": true, 00:11:04.293 "flush": true, 00:11:04.293 "reset": true, 00:11:04.293 "nvme_admin": false, 00:11:04.293 "nvme_io": false, 00:11:04.293 "nvme_io_md": false, 00:11:04.293 "write_zeroes": true, 00:11:04.293 "zcopy": true, 00:11:04.293 "get_zone_info": false, 00:11:04.293 "zone_management": false, 00:11:04.293 "zone_append": false, 00:11:04.293 "compare": false, 00:11:04.293 "compare_and_write": false, 00:11:04.293 "abort": true, 00:11:04.293 "seek_hole": false, 00:11:04.293 "seek_data": false, 00:11:04.293 "copy": true, 00:11:04.293 "nvme_iov_md": false 00:11:04.293 }, 00:11:04.293 "memory_domains": [ 00:11:04.293 { 00:11:04.293 "dma_device_id": "system", 00:11:04.293 "dma_device_type": 1 00:11:04.293 }, 00:11:04.293 { 00:11:04.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.293 "dma_device_type": 2 00:11:04.293 } 00:11:04.293 ], 00:11:04.293 "driver_specific": {} 00:11:04.293 } 00:11:04.293 ] 00:11:04.293 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.293 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.293 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.294 "name": "Existed_Raid", 00:11:04.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.294 "strip_size_kb": 64, 00:11:04.294 "state": "configuring", 00:11:04.294 "raid_level": "concat", 00:11:04.294 "superblock": false, 00:11:04.294 "num_base_bdevs": 4, 00:11:04.294 "num_base_bdevs_discovered": 2, 00:11:04.294 "num_base_bdevs_operational": 4, 00:11:04.294 "base_bdevs_list": [ 00:11:04.294 { 00:11:04.294 "name": "BaseBdev1", 00:11:04.294 "uuid": "c1eb40dc-f932-4e65-972d-ec73ae19480a", 00:11:04.294 "is_configured": true, 00:11:04.294 "data_offset": 0, 00:11:04.294 "data_size": 65536 00:11:04.294 }, 00:11:04.294 { 00:11:04.294 "name": "BaseBdev2", 00:11:04.294 "uuid": "cf0fd44d-848e-47ce-b7c1-0b69ccf69f2e", 00:11:04.294 
"is_configured": true, 00:11:04.294 "data_offset": 0, 00:11:04.294 "data_size": 65536 00:11:04.294 }, 00:11:04.294 { 00:11:04.294 "name": "BaseBdev3", 00:11:04.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.294 "is_configured": false, 00:11:04.294 "data_offset": 0, 00:11:04.294 "data_size": 0 00:11:04.294 }, 00:11:04.294 { 00:11:04.294 "name": "BaseBdev4", 00:11:04.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.294 "is_configured": false, 00:11:04.294 "data_offset": 0, 00:11:04.294 "data_size": 0 00:11:04.294 } 00:11:04.294 ] 00:11:04.294 }' 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.294 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.553 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:04.553 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.553 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.553 [2024-11-19 12:03:07.917455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.553 BaseBdev3 00:11:04.553 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.553 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:04.553 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:04.553 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.553 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.553 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.553 12:03:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.553 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.553 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.553 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.812 [ 00:11:04.812 { 00:11:04.812 "name": "BaseBdev3", 00:11:04.812 "aliases": [ 00:11:04.812 "cd346f35-8cf0-4e62-9911-478e753c18dd" 00:11:04.812 ], 00:11:04.812 "product_name": "Malloc disk", 00:11:04.812 "block_size": 512, 00:11:04.812 "num_blocks": 65536, 00:11:04.812 "uuid": "cd346f35-8cf0-4e62-9911-478e753c18dd", 00:11:04.812 "assigned_rate_limits": { 00:11:04.812 "rw_ios_per_sec": 0, 00:11:04.812 "rw_mbytes_per_sec": 0, 00:11:04.812 "r_mbytes_per_sec": 0, 00:11:04.812 "w_mbytes_per_sec": 0 00:11:04.812 }, 00:11:04.812 "claimed": true, 00:11:04.812 "claim_type": "exclusive_write", 00:11:04.812 "zoned": false, 00:11:04.812 "supported_io_types": { 00:11:04.812 "read": true, 00:11:04.812 "write": true, 00:11:04.812 "unmap": true, 00:11:04.812 "flush": true, 00:11:04.812 "reset": true, 00:11:04.812 "nvme_admin": false, 00:11:04.812 "nvme_io": false, 00:11:04.812 "nvme_io_md": false, 00:11:04.812 "write_zeroes": true, 00:11:04.812 "zcopy": true, 00:11:04.812 "get_zone_info": false, 00:11:04.812 "zone_management": false, 00:11:04.812 "zone_append": false, 00:11:04.812 "compare": false, 00:11:04.812 "compare_and_write": false, 
00:11:04.812 "abort": true, 00:11:04.812 "seek_hole": false, 00:11:04.812 "seek_data": false, 00:11:04.812 "copy": true, 00:11:04.812 "nvme_iov_md": false 00:11:04.812 }, 00:11:04.812 "memory_domains": [ 00:11:04.812 { 00:11:04.812 "dma_device_id": "system", 00:11:04.812 "dma_device_type": 1 00:11:04.812 }, 00:11:04.812 { 00:11:04.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.812 "dma_device_type": 2 00:11:04.812 } 00:11:04.812 ], 00:11:04.812 "driver_specific": {} 00:11:04.812 } 00:11:04.812 ] 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.812 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.812 "name": "Existed_Raid", 00:11:04.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.813 "strip_size_kb": 64, 00:11:04.813 "state": "configuring", 00:11:04.813 "raid_level": "concat", 00:11:04.813 "superblock": false, 00:11:04.813 "num_base_bdevs": 4, 00:11:04.813 "num_base_bdevs_discovered": 3, 00:11:04.813 "num_base_bdevs_operational": 4, 00:11:04.813 "base_bdevs_list": [ 00:11:04.813 { 00:11:04.813 "name": "BaseBdev1", 00:11:04.813 "uuid": "c1eb40dc-f932-4e65-972d-ec73ae19480a", 00:11:04.813 "is_configured": true, 00:11:04.813 "data_offset": 0, 00:11:04.813 "data_size": 65536 00:11:04.813 }, 00:11:04.813 { 00:11:04.813 "name": "BaseBdev2", 00:11:04.813 "uuid": "cf0fd44d-848e-47ce-b7c1-0b69ccf69f2e", 00:11:04.813 "is_configured": true, 00:11:04.813 "data_offset": 0, 00:11:04.813 "data_size": 65536 00:11:04.813 }, 00:11:04.813 { 00:11:04.813 "name": "BaseBdev3", 00:11:04.813 "uuid": "cd346f35-8cf0-4e62-9911-478e753c18dd", 00:11:04.813 "is_configured": true, 00:11:04.813 "data_offset": 0, 00:11:04.813 "data_size": 65536 00:11:04.813 }, 00:11:04.813 { 00:11:04.813 "name": "BaseBdev4", 00:11:04.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.813 "is_configured": false, 
00:11:04.813 "data_offset": 0, 00:11:04.813 "data_size": 0 00:11:04.813 } 00:11:04.813 ] 00:11:04.813 }' 00:11:04.813 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.813 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.072 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:05.072 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.072 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.072 [2024-11-19 12:03:08.438304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:05.072 [2024-11-19 12:03:08.438433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:05.072 [2024-11-19 12:03:08.438458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:05.072 [2024-11-19 12:03:08.438757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:05.072 [2024-11-19 12:03:08.438932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:05.072 [2024-11-19 12:03:08.438947] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:05.072 [2024-11-19 12:03:08.439230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.072 BaseBdev4 00:11:05.072 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.072 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:05.072 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:05.072 12:03:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.072 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:05.072 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.072 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.072 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.072 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.072 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.330 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.330 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:05.330 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.330 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.330 [ 00:11:05.330 { 00:11:05.330 "name": "BaseBdev4", 00:11:05.330 "aliases": [ 00:11:05.330 "a9d7a5a6-a749-4379-aacd-f65f2ed1368a" 00:11:05.330 ], 00:11:05.330 "product_name": "Malloc disk", 00:11:05.330 "block_size": 512, 00:11:05.330 "num_blocks": 65536, 00:11:05.330 "uuid": "a9d7a5a6-a749-4379-aacd-f65f2ed1368a", 00:11:05.330 "assigned_rate_limits": { 00:11:05.330 "rw_ios_per_sec": 0, 00:11:05.330 "rw_mbytes_per_sec": 0, 00:11:05.330 "r_mbytes_per_sec": 0, 00:11:05.330 "w_mbytes_per_sec": 0 00:11:05.330 }, 00:11:05.330 "claimed": true, 00:11:05.330 "claim_type": "exclusive_write", 00:11:05.330 "zoned": false, 00:11:05.330 "supported_io_types": { 00:11:05.330 "read": true, 00:11:05.330 "write": true, 00:11:05.330 "unmap": true, 00:11:05.330 "flush": true, 00:11:05.330 "reset": true, 00:11:05.330 
"nvme_admin": false, 00:11:05.330 "nvme_io": false, 00:11:05.330 "nvme_io_md": false, 00:11:05.330 "write_zeroes": true, 00:11:05.330 "zcopy": true, 00:11:05.330 "get_zone_info": false, 00:11:05.330 "zone_management": false, 00:11:05.330 "zone_append": false, 00:11:05.330 "compare": false, 00:11:05.330 "compare_and_write": false, 00:11:05.330 "abort": true, 00:11:05.330 "seek_hole": false, 00:11:05.330 "seek_data": false, 00:11:05.330 "copy": true, 00:11:05.330 "nvme_iov_md": false 00:11:05.330 }, 00:11:05.330 "memory_domains": [ 00:11:05.330 { 00:11:05.330 "dma_device_id": "system", 00:11:05.330 "dma_device_type": 1 00:11:05.330 }, 00:11:05.330 { 00:11:05.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.330 "dma_device_type": 2 00:11:05.330 } 00:11:05.330 ], 00:11:05.330 "driver_specific": {} 00:11:05.330 } 00:11:05.330 ] 00:11:05.330 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.330 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:05.330 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:05.330 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.330 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:05.330 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.330 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.330 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.331 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.331 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.331 
12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.331 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.331 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.331 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.331 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.331 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.331 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.331 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.331 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.331 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.331 "name": "Existed_Raid", 00:11:05.331 "uuid": "b51ed19e-ae47-421c-a4ef-0a03d63cf9bf", 00:11:05.331 "strip_size_kb": 64, 00:11:05.331 "state": "online", 00:11:05.331 "raid_level": "concat", 00:11:05.331 "superblock": false, 00:11:05.331 "num_base_bdevs": 4, 00:11:05.331 "num_base_bdevs_discovered": 4, 00:11:05.331 "num_base_bdevs_operational": 4, 00:11:05.331 "base_bdevs_list": [ 00:11:05.331 { 00:11:05.331 "name": "BaseBdev1", 00:11:05.331 "uuid": "c1eb40dc-f932-4e65-972d-ec73ae19480a", 00:11:05.331 "is_configured": true, 00:11:05.331 "data_offset": 0, 00:11:05.331 "data_size": 65536 00:11:05.331 }, 00:11:05.331 { 00:11:05.331 "name": "BaseBdev2", 00:11:05.331 "uuid": "cf0fd44d-848e-47ce-b7c1-0b69ccf69f2e", 00:11:05.331 "is_configured": true, 00:11:05.331 "data_offset": 0, 00:11:05.331 "data_size": 65536 00:11:05.331 }, 00:11:05.331 { 00:11:05.331 "name": "BaseBdev3", 
00:11:05.331 "uuid": "cd346f35-8cf0-4e62-9911-478e753c18dd", 00:11:05.331 "is_configured": true, 00:11:05.331 "data_offset": 0, 00:11:05.331 "data_size": 65536 00:11:05.331 }, 00:11:05.331 { 00:11:05.331 "name": "BaseBdev4", 00:11:05.331 "uuid": "a9d7a5a6-a749-4379-aacd-f65f2ed1368a", 00:11:05.331 "is_configured": true, 00:11:05.331 "data_offset": 0, 00:11:05.331 "data_size": 65536 00:11:05.331 } 00:11:05.331 ] 00:11:05.331 }' 00:11:05.331 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.331 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.590 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:05.590 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:05.590 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.590 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.590 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.590 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.590 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.590 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:05.590 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.590 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.590 [2024-11-19 12:03:08.945802] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.590 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.849 
12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.849 "name": "Existed_Raid", 00:11:05.849 "aliases": [ 00:11:05.849 "b51ed19e-ae47-421c-a4ef-0a03d63cf9bf" 00:11:05.849 ], 00:11:05.849 "product_name": "Raid Volume", 00:11:05.849 "block_size": 512, 00:11:05.849 "num_blocks": 262144, 00:11:05.849 "uuid": "b51ed19e-ae47-421c-a4ef-0a03d63cf9bf", 00:11:05.849 "assigned_rate_limits": { 00:11:05.849 "rw_ios_per_sec": 0, 00:11:05.849 "rw_mbytes_per_sec": 0, 00:11:05.849 "r_mbytes_per_sec": 0, 00:11:05.849 "w_mbytes_per_sec": 0 00:11:05.849 }, 00:11:05.849 "claimed": false, 00:11:05.849 "zoned": false, 00:11:05.849 "supported_io_types": { 00:11:05.849 "read": true, 00:11:05.849 "write": true, 00:11:05.849 "unmap": true, 00:11:05.849 "flush": true, 00:11:05.849 "reset": true, 00:11:05.849 "nvme_admin": false, 00:11:05.849 "nvme_io": false, 00:11:05.849 "nvme_io_md": false, 00:11:05.849 "write_zeroes": true, 00:11:05.849 "zcopy": false, 00:11:05.849 "get_zone_info": false, 00:11:05.849 "zone_management": false, 00:11:05.849 "zone_append": false, 00:11:05.849 "compare": false, 00:11:05.849 "compare_and_write": false, 00:11:05.849 "abort": false, 00:11:05.849 "seek_hole": false, 00:11:05.849 "seek_data": false, 00:11:05.849 "copy": false, 00:11:05.849 "nvme_iov_md": false 00:11:05.849 }, 00:11:05.849 "memory_domains": [ 00:11:05.849 { 00:11:05.849 "dma_device_id": "system", 00:11:05.849 "dma_device_type": 1 00:11:05.849 }, 00:11:05.849 { 00:11:05.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.849 "dma_device_type": 2 00:11:05.849 }, 00:11:05.849 { 00:11:05.849 "dma_device_id": "system", 00:11:05.849 "dma_device_type": 1 00:11:05.849 }, 00:11:05.849 { 00:11:05.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.849 "dma_device_type": 2 00:11:05.849 }, 00:11:05.849 { 00:11:05.849 "dma_device_id": "system", 00:11:05.849 "dma_device_type": 1 00:11:05.849 }, 00:11:05.849 { 00:11:05.850 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:05.850 "dma_device_type": 2 00:11:05.850 }, 00:11:05.850 { 00:11:05.850 "dma_device_id": "system", 00:11:05.850 "dma_device_type": 1 00:11:05.850 }, 00:11:05.850 { 00:11:05.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.850 "dma_device_type": 2 00:11:05.850 } 00:11:05.850 ], 00:11:05.850 "driver_specific": { 00:11:05.850 "raid": { 00:11:05.850 "uuid": "b51ed19e-ae47-421c-a4ef-0a03d63cf9bf", 00:11:05.850 "strip_size_kb": 64, 00:11:05.850 "state": "online", 00:11:05.850 "raid_level": "concat", 00:11:05.850 "superblock": false, 00:11:05.850 "num_base_bdevs": 4, 00:11:05.850 "num_base_bdevs_discovered": 4, 00:11:05.850 "num_base_bdevs_operational": 4, 00:11:05.850 "base_bdevs_list": [ 00:11:05.850 { 00:11:05.850 "name": "BaseBdev1", 00:11:05.850 "uuid": "c1eb40dc-f932-4e65-972d-ec73ae19480a", 00:11:05.850 "is_configured": true, 00:11:05.850 "data_offset": 0, 00:11:05.850 "data_size": 65536 00:11:05.850 }, 00:11:05.850 { 00:11:05.850 "name": "BaseBdev2", 00:11:05.850 "uuid": "cf0fd44d-848e-47ce-b7c1-0b69ccf69f2e", 00:11:05.850 "is_configured": true, 00:11:05.850 "data_offset": 0, 00:11:05.850 "data_size": 65536 00:11:05.850 }, 00:11:05.850 { 00:11:05.850 "name": "BaseBdev3", 00:11:05.850 "uuid": "cd346f35-8cf0-4e62-9911-478e753c18dd", 00:11:05.850 "is_configured": true, 00:11:05.850 "data_offset": 0, 00:11:05.850 "data_size": 65536 00:11:05.850 }, 00:11:05.850 { 00:11:05.850 "name": "BaseBdev4", 00:11:05.850 "uuid": "a9d7a5a6-a749-4379-aacd-f65f2ed1368a", 00:11:05.850 "is_configured": true, 00:11:05.850 "data_offset": 0, 00:11:05.850 "data_size": 65536 00:11:05.850 } 00:11:05.850 ] 00:11:05.850 } 00:11:05.850 } 00:11:05.850 }' 00:11:05.850 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:05.850 BaseBdev2 
00:11:05.850 BaseBdev3 00:11:05.850 BaseBdev4' 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.850 12:03:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.850 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.110 12:03:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.110 [2024-11-19 12:03:09.245081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:06.110 [2024-11-19 12:03:09.245161] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.110 [2024-11-19 12:03:09.245231] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.110 "name": "Existed_Raid", 00:11:06.110 "uuid": "b51ed19e-ae47-421c-a4ef-0a03d63cf9bf", 00:11:06.110 "strip_size_kb": 64, 00:11:06.110 "state": "offline", 00:11:06.110 "raid_level": "concat", 00:11:06.110 "superblock": false, 00:11:06.110 "num_base_bdevs": 4, 00:11:06.110 "num_base_bdevs_discovered": 3, 00:11:06.110 "num_base_bdevs_operational": 3, 00:11:06.110 "base_bdevs_list": [ 00:11:06.110 { 00:11:06.110 "name": null, 00:11:06.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.110 "is_configured": false, 00:11:06.110 "data_offset": 0, 00:11:06.110 "data_size": 65536 00:11:06.110 }, 00:11:06.110 { 00:11:06.110 "name": "BaseBdev2", 00:11:06.110 "uuid": "cf0fd44d-848e-47ce-b7c1-0b69ccf69f2e", 00:11:06.110 "is_configured": 
true, 00:11:06.110 "data_offset": 0, 00:11:06.110 "data_size": 65536 00:11:06.110 }, 00:11:06.110 { 00:11:06.110 "name": "BaseBdev3", 00:11:06.110 "uuid": "cd346f35-8cf0-4e62-9911-478e753c18dd", 00:11:06.110 "is_configured": true, 00:11:06.110 "data_offset": 0, 00:11:06.110 "data_size": 65536 00:11:06.110 }, 00:11:06.110 { 00:11:06.110 "name": "BaseBdev4", 00:11:06.110 "uuid": "a9d7a5a6-a749-4379-aacd-f65f2ed1368a", 00:11:06.110 "is_configured": true, 00:11:06.110 "data_offset": 0, 00:11:06.110 "data_size": 65536 00:11:06.110 } 00:11:06.110 ] 00:11:06.110 }' 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.110 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.679 [2024-11-19 12:03:09.865942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.679 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.680 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.680 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.680 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.680 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.680 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.680 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:06.680 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.680 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.680 [2024-11-19 12:03:10.014744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:06.939 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.939 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.939 12:03:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.939 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.939 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.939 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.939 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.939 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.939 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.939 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.939 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:06.939 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.939 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.939 [2024-11-19 12:03:10.165800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:06.939 [2024-11-19 12:03:10.165937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.940 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.200 BaseBdev2 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.200 [ 00:11:07.200 { 00:11:07.200 "name": "BaseBdev2", 00:11:07.200 "aliases": [ 00:11:07.200 "8332aa9b-6a39-4869-a370-6f5b391bf616" 00:11:07.200 ], 00:11:07.200 "product_name": "Malloc disk", 00:11:07.200 "block_size": 512, 00:11:07.200 "num_blocks": 65536, 00:11:07.200 "uuid": "8332aa9b-6a39-4869-a370-6f5b391bf616", 00:11:07.200 "assigned_rate_limits": { 00:11:07.200 "rw_ios_per_sec": 0, 00:11:07.200 "rw_mbytes_per_sec": 0, 00:11:07.200 "r_mbytes_per_sec": 0, 00:11:07.200 "w_mbytes_per_sec": 0 00:11:07.200 }, 00:11:07.200 "claimed": false, 00:11:07.200 "zoned": false, 00:11:07.200 "supported_io_types": { 00:11:07.200 "read": true, 00:11:07.200 "write": true, 00:11:07.200 "unmap": true, 00:11:07.200 "flush": true, 00:11:07.200 "reset": true, 00:11:07.200 "nvme_admin": false, 00:11:07.200 "nvme_io": false, 00:11:07.200 "nvme_io_md": false, 00:11:07.200 "write_zeroes": true, 00:11:07.200 "zcopy": true, 00:11:07.200 "get_zone_info": false, 00:11:07.200 "zone_management": false, 00:11:07.200 "zone_append": false, 00:11:07.200 "compare": false, 00:11:07.200 "compare_and_write": false, 00:11:07.200 "abort": true, 00:11:07.200 "seek_hole": false, 00:11:07.200 
"seek_data": false, 00:11:07.200 "copy": true, 00:11:07.200 "nvme_iov_md": false 00:11:07.200 }, 00:11:07.200 "memory_domains": [ 00:11:07.200 { 00:11:07.200 "dma_device_id": "system", 00:11:07.200 "dma_device_type": 1 00:11:07.200 }, 00:11:07.200 { 00:11:07.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.200 "dma_device_type": 2 00:11:07.200 } 00:11:07.200 ], 00:11:07.200 "driver_specific": {} 00:11:07.200 } 00:11:07.200 ] 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.200 BaseBdev3 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.200 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.201 [ 00:11:07.201 { 00:11:07.201 "name": "BaseBdev3", 00:11:07.201 "aliases": [ 00:11:07.201 "9f7ce88d-e9df-45d0-8d23-47fcc8757871" 00:11:07.201 ], 00:11:07.201 "product_name": "Malloc disk", 00:11:07.201 "block_size": 512, 00:11:07.201 "num_blocks": 65536, 00:11:07.201 "uuid": "9f7ce88d-e9df-45d0-8d23-47fcc8757871", 00:11:07.201 "assigned_rate_limits": { 00:11:07.201 "rw_ios_per_sec": 0, 00:11:07.201 "rw_mbytes_per_sec": 0, 00:11:07.201 "r_mbytes_per_sec": 0, 00:11:07.201 "w_mbytes_per_sec": 0 00:11:07.201 }, 00:11:07.201 "claimed": false, 00:11:07.201 "zoned": false, 00:11:07.201 "supported_io_types": { 00:11:07.201 "read": true, 00:11:07.201 "write": true, 00:11:07.201 "unmap": true, 00:11:07.201 "flush": true, 00:11:07.201 "reset": true, 00:11:07.201 "nvme_admin": false, 00:11:07.201 "nvme_io": false, 00:11:07.201 "nvme_io_md": false, 00:11:07.201 "write_zeroes": true, 00:11:07.201 "zcopy": true, 00:11:07.201 "get_zone_info": false, 00:11:07.201 "zone_management": false, 00:11:07.201 "zone_append": false, 00:11:07.201 "compare": false, 00:11:07.201 "compare_and_write": false, 00:11:07.201 "abort": true, 00:11:07.201 "seek_hole": false, 00:11:07.201 "seek_data": false, 
00:11:07.201 "copy": true, 00:11:07.201 "nvme_iov_md": false 00:11:07.201 }, 00:11:07.201 "memory_domains": [ 00:11:07.201 { 00:11:07.201 "dma_device_id": "system", 00:11:07.201 "dma_device_type": 1 00:11:07.201 }, 00:11:07.201 { 00:11:07.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.201 "dma_device_type": 2 00:11:07.201 } 00:11:07.201 ], 00:11:07.201 "driver_specific": {} 00:11:07.201 } 00:11:07.201 ] 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.201 BaseBdev4 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.201 
12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.201 [ 00:11:07.201 { 00:11:07.201 "name": "BaseBdev4", 00:11:07.201 "aliases": [ 00:11:07.201 "8c7db0e8-d7d9-4b70-8bbc-9210526ceadf" 00:11:07.201 ], 00:11:07.201 "product_name": "Malloc disk", 00:11:07.201 "block_size": 512, 00:11:07.201 "num_blocks": 65536, 00:11:07.201 "uuid": "8c7db0e8-d7d9-4b70-8bbc-9210526ceadf", 00:11:07.201 "assigned_rate_limits": { 00:11:07.201 "rw_ios_per_sec": 0, 00:11:07.201 "rw_mbytes_per_sec": 0, 00:11:07.201 "r_mbytes_per_sec": 0, 00:11:07.201 "w_mbytes_per_sec": 0 00:11:07.201 }, 00:11:07.201 "claimed": false, 00:11:07.201 "zoned": false, 00:11:07.201 "supported_io_types": { 00:11:07.201 "read": true, 00:11:07.201 "write": true, 00:11:07.201 "unmap": true, 00:11:07.201 "flush": true, 00:11:07.201 "reset": true, 00:11:07.201 "nvme_admin": false, 00:11:07.201 "nvme_io": false, 00:11:07.201 "nvme_io_md": false, 00:11:07.201 "write_zeroes": true, 00:11:07.201 "zcopy": true, 00:11:07.201 "get_zone_info": false, 00:11:07.201 "zone_management": false, 00:11:07.201 "zone_append": false, 00:11:07.201 "compare": false, 00:11:07.201 "compare_and_write": false, 00:11:07.201 "abort": true, 00:11:07.201 "seek_hole": false, 00:11:07.201 "seek_data": false, 00:11:07.201 
"copy": true, 00:11:07.201 "nvme_iov_md": false 00:11:07.201 }, 00:11:07.201 "memory_domains": [ 00:11:07.201 { 00:11:07.201 "dma_device_id": "system", 00:11:07.201 "dma_device_type": 1 00:11:07.201 }, 00:11:07.201 { 00:11:07.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.201 "dma_device_type": 2 00:11:07.201 } 00:11:07.201 ], 00:11:07.201 "driver_specific": {} 00:11:07.201 } 00:11:07.201 ] 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.201 [2024-11-19 12:03:10.541954] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:07.201 [2024-11-19 12:03:10.542094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:07.201 [2024-11-19 12:03:10.542155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.201 [2024-11-19 12:03:10.543952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.201 [2024-11-19 12:03:10.544062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.201 12:03:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.201 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.461 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.461 "name": "Existed_Raid", 00:11:07.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.461 "strip_size_kb": 64, 00:11:07.461 "state": "configuring", 00:11:07.461 
"raid_level": "concat", 00:11:07.461 "superblock": false, 00:11:07.461 "num_base_bdevs": 4, 00:11:07.461 "num_base_bdevs_discovered": 3, 00:11:07.461 "num_base_bdevs_operational": 4, 00:11:07.461 "base_bdevs_list": [ 00:11:07.461 { 00:11:07.461 "name": "BaseBdev1", 00:11:07.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.461 "is_configured": false, 00:11:07.461 "data_offset": 0, 00:11:07.461 "data_size": 0 00:11:07.461 }, 00:11:07.461 { 00:11:07.461 "name": "BaseBdev2", 00:11:07.461 "uuid": "8332aa9b-6a39-4869-a370-6f5b391bf616", 00:11:07.461 "is_configured": true, 00:11:07.461 "data_offset": 0, 00:11:07.461 "data_size": 65536 00:11:07.461 }, 00:11:07.461 { 00:11:07.461 "name": "BaseBdev3", 00:11:07.461 "uuid": "9f7ce88d-e9df-45d0-8d23-47fcc8757871", 00:11:07.461 "is_configured": true, 00:11:07.461 "data_offset": 0, 00:11:07.461 "data_size": 65536 00:11:07.461 }, 00:11:07.461 { 00:11:07.461 "name": "BaseBdev4", 00:11:07.461 "uuid": "8c7db0e8-d7d9-4b70-8bbc-9210526ceadf", 00:11:07.461 "is_configured": true, 00:11:07.461 "data_offset": 0, 00:11:07.461 "data_size": 65536 00:11:07.461 } 00:11:07.461 ] 00:11:07.461 }' 00:11:07.461 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.461 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.721 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:07.721 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.721 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.721 [2024-11-19 12:03:10.993192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:07.721 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.721 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.721 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.721 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.721 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.721 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.721 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.721 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.721 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.721 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.721 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.721 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.721 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.721 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.721 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.721 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.721 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.721 "name": "Existed_Raid", 00:11:07.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.721 "strip_size_kb": 64, 00:11:07.721 "state": "configuring", 00:11:07.721 "raid_level": "concat", 00:11:07.721 "superblock": false, 
00:11:07.721 "num_base_bdevs": 4, 00:11:07.721 "num_base_bdevs_discovered": 2, 00:11:07.721 "num_base_bdevs_operational": 4, 00:11:07.721 "base_bdevs_list": [ 00:11:07.721 { 00:11:07.721 "name": "BaseBdev1", 00:11:07.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.721 "is_configured": false, 00:11:07.721 "data_offset": 0, 00:11:07.721 "data_size": 0 00:11:07.721 }, 00:11:07.721 { 00:11:07.721 "name": null, 00:11:07.721 "uuid": "8332aa9b-6a39-4869-a370-6f5b391bf616", 00:11:07.721 "is_configured": false, 00:11:07.721 "data_offset": 0, 00:11:07.721 "data_size": 65536 00:11:07.721 }, 00:11:07.721 { 00:11:07.721 "name": "BaseBdev3", 00:11:07.721 "uuid": "9f7ce88d-e9df-45d0-8d23-47fcc8757871", 00:11:07.721 "is_configured": true, 00:11:07.721 "data_offset": 0, 00:11:07.721 "data_size": 65536 00:11:07.721 }, 00:11:07.721 { 00:11:07.721 "name": "BaseBdev4", 00:11:07.721 "uuid": "8c7db0e8-d7d9-4b70-8bbc-9210526ceadf", 00:11:07.721 "is_configured": true, 00:11:07.721 "data_offset": 0, 00:11:07.721 "data_size": 65536 00:11:07.721 } 00:11:07.721 ] 00:11:07.721 }' 00:11:07.721 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.721 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:08.291 12:03:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.291 [2024-11-19 12:03:11.505751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.291 BaseBdev1 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.291 [ 00:11:08.291 { 00:11:08.291 "name": "BaseBdev1", 00:11:08.291 "aliases": [ 00:11:08.291 "594572c8-499b-4c49-a968-504158971255" 00:11:08.291 ], 00:11:08.291 "product_name": "Malloc disk", 00:11:08.291 "block_size": 512, 00:11:08.291 "num_blocks": 65536, 00:11:08.291 "uuid": "594572c8-499b-4c49-a968-504158971255", 00:11:08.291 "assigned_rate_limits": { 00:11:08.291 "rw_ios_per_sec": 0, 00:11:08.291 "rw_mbytes_per_sec": 0, 00:11:08.291 "r_mbytes_per_sec": 0, 00:11:08.291 "w_mbytes_per_sec": 0 00:11:08.291 }, 00:11:08.291 "claimed": true, 00:11:08.291 "claim_type": "exclusive_write", 00:11:08.291 "zoned": false, 00:11:08.291 "supported_io_types": { 00:11:08.291 "read": true, 00:11:08.291 "write": true, 00:11:08.291 "unmap": true, 00:11:08.291 "flush": true, 00:11:08.291 "reset": true, 00:11:08.291 "nvme_admin": false, 00:11:08.291 "nvme_io": false, 00:11:08.291 "nvme_io_md": false, 00:11:08.291 "write_zeroes": true, 00:11:08.291 "zcopy": true, 00:11:08.291 "get_zone_info": false, 00:11:08.291 "zone_management": false, 00:11:08.291 "zone_append": false, 00:11:08.291 "compare": false, 00:11:08.291 "compare_and_write": false, 00:11:08.291 "abort": true, 00:11:08.291 "seek_hole": false, 00:11:08.291 "seek_data": false, 00:11:08.291 "copy": true, 00:11:08.291 "nvme_iov_md": false 00:11:08.291 }, 00:11:08.291 "memory_domains": [ 00:11:08.291 { 00:11:08.291 "dma_device_id": "system", 00:11:08.291 "dma_device_type": 1 00:11:08.291 }, 00:11:08.291 { 00:11:08.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.291 "dma_device_type": 2 00:11:08.291 } 00:11:08.291 ], 00:11:08.291 "driver_specific": {} 00:11:08.291 } 00:11:08.291 ] 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.291 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.291 "name": "Existed_Raid", 00:11:08.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.291 "strip_size_kb": 64, 00:11:08.291 "state": "configuring", 00:11:08.291 "raid_level": "concat", 00:11:08.291 "superblock": false, 
00:11:08.291 "num_base_bdevs": 4, 00:11:08.291 "num_base_bdevs_discovered": 3, 00:11:08.291 "num_base_bdevs_operational": 4, 00:11:08.291 "base_bdevs_list": [ 00:11:08.291 { 00:11:08.291 "name": "BaseBdev1", 00:11:08.291 "uuid": "594572c8-499b-4c49-a968-504158971255", 00:11:08.291 "is_configured": true, 00:11:08.291 "data_offset": 0, 00:11:08.291 "data_size": 65536 00:11:08.292 }, 00:11:08.292 { 00:11:08.292 "name": null, 00:11:08.292 "uuid": "8332aa9b-6a39-4869-a370-6f5b391bf616", 00:11:08.292 "is_configured": false, 00:11:08.292 "data_offset": 0, 00:11:08.292 "data_size": 65536 00:11:08.292 }, 00:11:08.292 { 00:11:08.292 "name": "BaseBdev3", 00:11:08.292 "uuid": "9f7ce88d-e9df-45d0-8d23-47fcc8757871", 00:11:08.292 "is_configured": true, 00:11:08.292 "data_offset": 0, 00:11:08.292 "data_size": 65536 00:11:08.292 }, 00:11:08.292 { 00:11:08.292 "name": "BaseBdev4", 00:11:08.292 "uuid": "8c7db0e8-d7d9-4b70-8bbc-9210526ceadf", 00:11:08.292 "is_configured": true, 00:11:08.292 "data_offset": 0, 00:11:08.292 "data_size": 65536 00:11:08.292 } 00:11:08.292 ] 00:11:08.292 }' 00:11:08.292 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.292 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.862 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:08.862 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.862 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.862 12:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:08.862 12:03:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.862 [2024-11-19 12:03:12.044929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.862 "name": "Existed_Raid", 00:11:08.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.862 "strip_size_kb": 64, 00:11:08.862 "state": "configuring", 00:11:08.862 "raid_level": "concat", 00:11:08.862 "superblock": false, 00:11:08.862 "num_base_bdevs": 4, 00:11:08.862 "num_base_bdevs_discovered": 2, 00:11:08.862 "num_base_bdevs_operational": 4, 00:11:08.862 "base_bdevs_list": [ 00:11:08.862 { 00:11:08.862 "name": "BaseBdev1", 00:11:08.862 "uuid": "594572c8-499b-4c49-a968-504158971255", 00:11:08.862 "is_configured": true, 00:11:08.862 "data_offset": 0, 00:11:08.862 "data_size": 65536 00:11:08.862 }, 00:11:08.862 { 00:11:08.862 "name": null, 00:11:08.862 "uuid": "8332aa9b-6a39-4869-a370-6f5b391bf616", 00:11:08.862 "is_configured": false, 00:11:08.862 "data_offset": 0, 00:11:08.862 "data_size": 65536 00:11:08.862 }, 00:11:08.862 { 00:11:08.862 "name": null, 00:11:08.862 "uuid": "9f7ce88d-e9df-45d0-8d23-47fcc8757871", 00:11:08.862 "is_configured": false, 00:11:08.862 "data_offset": 0, 00:11:08.862 "data_size": 65536 00:11:08.862 }, 00:11:08.862 { 00:11:08.862 "name": "BaseBdev4", 00:11:08.862 "uuid": "8c7db0e8-d7d9-4b70-8bbc-9210526ceadf", 00:11:08.862 "is_configured": true, 00:11:08.862 "data_offset": 0, 00:11:08.862 "data_size": 65536 00:11:08.862 } 00:11:08.862 ] 00:11:08.862 }' 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.862 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.135 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:09.135 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.135 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.135 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.395 [2024-11-19 12:03:12.536065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.395 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.395 "name": "Existed_Raid", 00:11:09.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.395 "strip_size_kb": 64, 00:11:09.395 "state": "configuring", 00:11:09.395 "raid_level": "concat", 00:11:09.395 "superblock": false, 00:11:09.395 "num_base_bdevs": 4, 00:11:09.395 "num_base_bdevs_discovered": 3, 00:11:09.395 "num_base_bdevs_operational": 4, 00:11:09.395 "base_bdevs_list": [ 00:11:09.395 { 00:11:09.395 "name": "BaseBdev1", 00:11:09.395 "uuid": "594572c8-499b-4c49-a968-504158971255", 00:11:09.395 "is_configured": true, 00:11:09.395 "data_offset": 0, 00:11:09.396 "data_size": 65536 00:11:09.396 }, 00:11:09.396 { 00:11:09.396 "name": null, 00:11:09.396 "uuid": "8332aa9b-6a39-4869-a370-6f5b391bf616", 00:11:09.396 "is_configured": false, 00:11:09.396 "data_offset": 0, 00:11:09.396 "data_size": 65536 00:11:09.396 }, 00:11:09.396 { 00:11:09.396 "name": "BaseBdev3", 00:11:09.396 "uuid": "9f7ce88d-e9df-45d0-8d23-47fcc8757871", 00:11:09.396 "is_configured": 
true, 00:11:09.396 "data_offset": 0, 00:11:09.396 "data_size": 65536 00:11:09.396 }, 00:11:09.396 { 00:11:09.396 "name": "BaseBdev4", 00:11:09.396 "uuid": "8c7db0e8-d7d9-4b70-8bbc-9210526ceadf", 00:11:09.396 "is_configured": true, 00:11:09.396 "data_offset": 0, 00:11:09.396 "data_size": 65536 00:11:09.396 } 00:11:09.396 ] 00:11:09.396 }' 00:11:09.396 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.396 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.655 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.655 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.655 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.655 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:09.655 12:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.655 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:09.655 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:09.655 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.655 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.655 [2024-11-19 12:03:13.023280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.915 "name": "Existed_Raid", 00:11:09.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.915 "strip_size_kb": 64, 00:11:09.915 "state": "configuring", 00:11:09.915 "raid_level": "concat", 00:11:09.915 "superblock": false, 00:11:09.915 "num_base_bdevs": 4, 00:11:09.915 "num_base_bdevs_discovered": 2, 00:11:09.915 "num_base_bdevs_operational": 4, 00:11:09.915 
"base_bdevs_list": [ 00:11:09.915 { 00:11:09.915 "name": null, 00:11:09.915 "uuid": "594572c8-499b-4c49-a968-504158971255", 00:11:09.915 "is_configured": false, 00:11:09.915 "data_offset": 0, 00:11:09.915 "data_size": 65536 00:11:09.915 }, 00:11:09.915 { 00:11:09.915 "name": null, 00:11:09.915 "uuid": "8332aa9b-6a39-4869-a370-6f5b391bf616", 00:11:09.915 "is_configured": false, 00:11:09.915 "data_offset": 0, 00:11:09.915 "data_size": 65536 00:11:09.915 }, 00:11:09.915 { 00:11:09.915 "name": "BaseBdev3", 00:11:09.915 "uuid": "9f7ce88d-e9df-45d0-8d23-47fcc8757871", 00:11:09.915 "is_configured": true, 00:11:09.915 "data_offset": 0, 00:11:09.915 "data_size": 65536 00:11:09.915 }, 00:11:09.915 { 00:11:09.915 "name": "BaseBdev4", 00:11:09.915 "uuid": "8c7db0e8-d7d9-4b70-8bbc-9210526ceadf", 00:11:09.915 "is_configured": true, 00:11:09.915 "data_offset": 0, 00:11:09.915 "data_size": 65536 00:11:09.915 } 00:11:09.915 ] 00:11:09.915 }' 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.915 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:10.485 12:03:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.485 [2024-11-19 12:03:13.610908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.485 12:03:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.485 "name": "Existed_Raid", 00:11:10.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.485 "strip_size_kb": 64, 00:11:10.485 "state": "configuring", 00:11:10.485 "raid_level": "concat", 00:11:10.485 "superblock": false, 00:11:10.485 "num_base_bdevs": 4, 00:11:10.485 "num_base_bdevs_discovered": 3, 00:11:10.485 "num_base_bdevs_operational": 4, 00:11:10.485 "base_bdevs_list": [ 00:11:10.485 { 00:11:10.485 "name": null, 00:11:10.485 "uuid": "594572c8-499b-4c49-a968-504158971255", 00:11:10.485 "is_configured": false, 00:11:10.485 "data_offset": 0, 00:11:10.485 "data_size": 65536 00:11:10.485 }, 00:11:10.485 { 00:11:10.485 "name": "BaseBdev2", 00:11:10.485 "uuid": "8332aa9b-6a39-4869-a370-6f5b391bf616", 00:11:10.485 "is_configured": true, 00:11:10.485 "data_offset": 0, 00:11:10.485 "data_size": 65536 00:11:10.485 }, 00:11:10.485 { 00:11:10.485 "name": "BaseBdev3", 00:11:10.485 "uuid": "9f7ce88d-e9df-45d0-8d23-47fcc8757871", 00:11:10.485 "is_configured": true, 00:11:10.485 "data_offset": 0, 00:11:10.485 "data_size": 65536 00:11:10.485 }, 00:11:10.485 { 00:11:10.485 "name": "BaseBdev4", 00:11:10.485 "uuid": "8c7db0e8-d7d9-4b70-8bbc-9210526ceadf", 00:11:10.485 "is_configured": true, 00:11:10.485 "data_offset": 0, 00:11:10.485 "data_size": 65536 00:11:10.485 } 00:11:10.485 ] 00:11:10.485 }' 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.485 12:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.745 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.745 12:03:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.745 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.745 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.745 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.745 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:10.745 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.745 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.745 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.745 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:10.745 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.745 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 594572c8-499b-4c49-a968-504158971255 00:11:10.745 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.745 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.004 [2024-11-19 12:03:14.152278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:11.004 [2024-11-19 12:03:14.152408] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:11.004 [2024-11-19 12:03:14.152432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:11.004 [2024-11-19 12:03:14.152731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:11.004 [2024-11-19 12:03:14.152915] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:11.004 [2024-11-19 12:03:14.152960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:11.004 [2024-11-19 12:03:14.153259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.004 NewBaseBdev 00:11:11.004 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.004 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:11.004 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:11.004 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:11.004 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.005 [ 00:11:11.005 { 
00:11:11.005 "name": "NewBaseBdev", 00:11:11.005 "aliases": [ 00:11:11.005 "594572c8-499b-4c49-a968-504158971255" 00:11:11.005 ], 00:11:11.005 "product_name": "Malloc disk", 00:11:11.005 "block_size": 512, 00:11:11.005 "num_blocks": 65536, 00:11:11.005 "uuid": "594572c8-499b-4c49-a968-504158971255", 00:11:11.005 "assigned_rate_limits": { 00:11:11.005 "rw_ios_per_sec": 0, 00:11:11.005 "rw_mbytes_per_sec": 0, 00:11:11.005 "r_mbytes_per_sec": 0, 00:11:11.005 "w_mbytes_per_sec": 0 00:11:11.005 }, 00:11:11.005 "claimed": true, 00:11:11.005 "claim_type": "exclusive_write", 00:11:11.005 "zoned": false, 00:11:11.005 "supported_io_types": { 00:11:11.005 "read": true, 00:11:11.005 "write": true, 00:11:11.005 "unmap": true, 00:11:11.005 "flush": true, 00:11:11.005 "reset": true, 00:11:11.005 "nvme_admin": false, 00:11:11.005 "nvme_io": false, 00:11:11.005 "nvme_io_md": false, 00:11:11.005 "write_zeroes": true, 00:11:11.005 "zcopy": true, 00:11:11.005 "get_zone_info": false, 00:11:11.005 "zone_management": false, 00:11:11.005 "zone_append": false, 00:11:11.005 "compare": false, 00:11:11.005 "compare_and_write": false, 00:11:11.005 "abort": true, 00:11:11.005 "seek_hole": false, 00:11:11.005 "seek_data": false, 00:11:11.005 "copy": true, 00:11:11.005 "nvme_iov_md": false 00:11:11.005 }, 00:11:11.005 "memory_domains": [ 00:11:11.005 { 00:11:11.005 "dma_device_id": "system", 00:11:11.005 "dma_device_type": 1 00:11:11.005 }, 00:11:11.005 { 00:11:11.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.005 "dma_device_type": 2 00:11:11.005 } 00:11:11.005 ], 00:11:11.005 "driver_specific": {} 00:11:11.005 } 00:11:11.005 ] 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:11.005 
12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.005 "name": "Existed_Raid", 00:11:11.005 "uuid": "9e8d5027-a01c-4941-a0f7-2bf1c368005f", 00:11:11.005 "strip_size_kb": 64, 00:11:11.005 "state": "online", 00:11:11.005 "raid_level": "concat", 00:11:11.005 "superblock": false, 00:11:11.005 "num_base_bdevs": 4, 00:11:11.005 "num_base_bdevs_discovered": 4, 00:11:11.005 
"num_base_bdevs_operational": 4, 00:11:11.005 "base_bdevs_list": [ 00:11:11.005 { 00:11:11.005 "name": "NewBaseBdev", 00:11:11.005 "uuid": "594572c8-499b-4c49-a968-504158971255", 00:11:11.005 "is_configured": true, 00:11:11.005 "data_offset": 0, 00:11:11.005 "data_size": 65536 00:11:11.005 }, 00:11:11.005 { 00:11:11.005 "name": "BaseBdev2", 00:11:11.005 "uuid": "8332aa9b-6a39-4869-a370-6f5b391bf616", 00:11:11.005 "is_configured": true, 00:11:11.005 "data_offset": 0, 00:11:11.005 "data_size": 65536 00:11:11.005 }, 00:11:11.005 { 00:11:11.005 "name": "BaseBdev3", 00:11:11.005 "uuid": "9f7ce88d-e9df-45d0-8d23-47fcc8757871", 00:11:11.005 "is_configured": true, 00:11:11.005 "data_offset": 0, 00:11:11.005 "data_size": 65536 00:11:11.005 }, 00:11:11.005 { 00:11:11.005 "name": "BaseBdev4", 00:11:11.005 "uuid": "8c7db0e8-d7d9-4b70-8bbc-9210526ceadf", 00:11:11.005 "is_configured": true, 00:11:11.005 "data_offset": 0, 00:11:11.005 "data_size": 65536 00:11:11.005 } 00:11:11.005 ] 00:11:11.005 }' 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.005 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.265 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:11.265 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:11.265 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.265 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.265 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.265 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.265 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.265 
12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:11.265 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.265 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.265 [2024-11-19 12:03:14.607917] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.265 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.525 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.525 "name": "Existed_Raid", 00:11:11.525 "aliases": [ 00:11:11.525 "9e8d5027-a01c-4941-a0f7-2bf1c368005f" 00:11:11.525 ], 00:11:11.525 "product_name": "Raid Volume", 00:11:11.525 "block_size": 512, 00:11:11.525 "num_blocks": 262144, 00:11:11.525 "uuid": "9e8d5027-a01c-4941-a0f7-2bf1c368005f", 00:11:11.525 "assigned_rate_limits": { 00:11:11.525 "rw_ios_per_sec": 0, 00:11:11.525 "rw_mbytes_per_sec": 0, 00:11:11.525 "r_mbytes_per_sec": 0, 00:11:11.525 "w_mbytes_per_sec": 0 00:11:11.525 }, 00:11:11.525 "claimed": false, 00:11:11.525 "zoned": false, 00:11:11.525 "supported_io_types": { 00:11:11.525 "read": true, 00:11:11.525 "write": true, 00:11:11.525 "unmap": true, 00:11:11.525 "flush": true, 00:11:11.525 "reset": true, 00:11:11.525 "nvme_admin": false, 00:11:11.525 "nvme_io": false, 00:11:11.525 "nvme_io_md": false, 00:11:11.525 "write_zeroes": true, 00:11:11.525 "zcopy": false, 00:11:11.525 "get_zone_info": false, 00:11:11.525 "zone_management": false, 00:11:11.525 "zone_append": false, 00:11:11.525 "compare": false, 00:11:11.525 "compare_and_write": false, 00:11:11.525 "abort": false, 00:11:11.525 "seek_hole": false, 00:11:11.525 "seek_data": false, 00:11:11.525 "copy": false, 00:11:11.525 "nvme_iov_md": false 00:11:11.525 }, 00:11:11.525 "memory_domains": [ 00:11:11.525 { 00:11:11.525 "dma_device_id": 
"system", 00:11:11.525 "dma_device_type": 1 00:11:11.525 }, 00:11:11.525 { 00:11:11.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.525 "dma_device_type": 2 00:11:11.525 }, 00:11:11.525 { 00:11:11.525 "dma_device_id": "system", 00:11:11.525 "dma_device_type": 1 00:11:11.525 }, 00:11:11.525 { 00:11:11.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.525 "dma_device_type": 2 00:11:11.525 }, 00:11:11.525 { 00:11:11.525 "dma_device_id": "system", 00:11:11.525 "dma_device_type": 1 00:11:11.525 }, 00:11:11.525 { 00:11:11.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.525 "dma_device_type": 2 00:11:11.525 }, 00:11:11.525 { 00:11:11.525 "dma_device_id": "system", 00:11:11.525 "dma_device_type": 1 00:11:11.525 }, 00:11:11.525 { 00:11:11.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.525 "dma_device_type": 2 00:11:11.525 } 00:11:11.525 ], 00:11:11.525 "driver_specific": { 00:11:11.525 "raid": { 00:11:11.525 "uuid": "9e8d5027-a01c-4941-a0f7-2bf1c368005f", 00:11:11.525 "strip_size_kb": 64, 00:11:11.525 "state": "online", 00:11:11.525 "raid_level": "concat", 00:11:11.525 "superblock": false, 00:11:11.525 "num_base_bdevs": 4, 00:11:11.525 "num_base_bdevs_discovered": 4, 00:11:11.525 "num_base_bdevs_operational": 4, 00:11:11.525 "base_bdevs_list": [ 00:11:11.525 { 00:11:11.525 "name": "NewBaseBdev", 00:11:11.525 "uuid": "594572c8-499b-4c49-a968-504158971255", 00:11:11.525 "is_configured": true, 00:11:11.525 "data_offset": 0, 00:11:11.525 "data_size": 65536 00:11:11.525 }, 00:11:11.525 { 00:11:11.525 "name": "BaseBdev2", 00:11:11.525 "uuid": "8332aa9b-6a39-4869-a370-6f5b391bf616", 00:11:11.525 "is_configured": true, 00:11:11.525 "data_offset": 0, 00:11:11.525 "data_size": 65536 00:11:11.525 }, 00:11:11.525 { 00:11:11.525 "name": "BaseBdev3", 00:11:11.525 "uuid": "9f7ce88d-e9df-45d0-8d23-47fcc8757871", 00:11:11.525 "is_configured": true, 00:11:11.525 "data_offset": 0, 00:11:11.525 "data_size": 65536 00:11:11.525 }, 00:11:11.525 { 00:11:11.525 "name": 
"BaseBdev4", 00:11:11.525 "uuid": "8c7db0e8-d7d9-4b70-8bbc-9210526ceadf", 00:11:11.525 "is_configured": true, 00:11:11.525 "data_offset": 0, 00:11:11.525 "data_size": 65536 00:11:11.525 } 00:11:11.525 ] 00:11:11.525 } 00:11:11.525 } 00:11:11.525 }' 00:11:11.525 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.525 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:11.525 BaseBdev2 00:11:11.525 BaseBdev3 00:11:11.525 BaseBdev4' 00:11:11.525 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.525 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:11.525 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:11.526 12:03:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.526 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.784 [2024-11-19 12:03:14.903114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.784 [2024-11-19 12:03:14.903197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.784 [2024-11-19 12:03:14.903304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.784 [2024-11-19 12:03:14.903395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.784 [2024-11-19 12:03:14.903439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:11.784 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.784 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71285 00:11:11.784 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71285 ']' 00:11:11.784 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71285 00:11:11.784 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:11.784 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.784 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71285 00:11:11.784 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.784 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.784 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71285' 00:11:11.784 killing process with pid 71285 00:11:11.784 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71285 00:11:11.784 [2024-11-19 12:03:14.951911] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.784 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71285 00:11:12.043 [2024-11-19 12:03:15.349568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:13.423 00:11:13.423 real 0m11.523s 00:11:13.423 user 0m18.392s 00:11:13.423 sys 0m1.964s 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.423 ************************************ 00:11:13.423 END TEST raid_state_function_test 00:11:13.423 ************************************ 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.423 12:03:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:11:13.423 12:03:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:13.423 12:03:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.423 12:03:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.423 ************************************ 00:11:13.423 START TEST raid_state_function_test_sb 00:11:13.423 ************************************ 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:13.423 12:03:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:13.423 Process raid pid: 71965 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71965 00:11:13.423 12:03:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71965' 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71965 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71965 ']' 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.423 12:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.423 [2024-11-19 12:03:16.600166] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:11:13.424 [2024-11-19 12:03:16.600361] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.424 [2024-11-19 12:03:16.770052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.683 [2024-11-19 12:03:16.887821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.942 [2024-11-19 12:03:17.092291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.942 [2024-11-19 12:03:17.092407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.202 [2024-11-19 12:03:17.471523] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.202 [2024-11-19 12:03:17.471657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.202 [2024-11-19 12:03:17.471700] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.202 [2024-11-19 12:03:17.471727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.202 [2024-11-19 12:03:17.471754] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:14.202 [2024-11-19 12:03:17.471778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.202 [2024-11-19 12:03:17.471805] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:14.202 [2024-11-19 12:03:17.471817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.202 
12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.202 "name": "Existed_Raid", 00:11:14.202 "uuid": "eedd7ab3-e31b-4fc2-a511-66e5d94f04e5", 00:11:14.202 "strip_size_kb": 64, 00:11:14.202 "state": "configuring", 00:11:14.202 "raid_level": "concat", 00:11:14.202 "superblock": true, 00:11:14.202 "num_base_bdevs": 4, 00:11:14.202 "num_base_bdevs_discovered": 0, 00:11:14.202 "num_base_bdevs_operational": 4, 00:11:14.202 "base_bdevs_list": [ 00:11:14.202 { 00:11:14.202 "name": "BaseBdev1", 00:11:14.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.202 "is_configured": false, 00:11:14.202 "data_offset": 0, 00:11:14.202 "data_size": 0 00:11:14.202 }, 00:11:14.202 { 00:11:14.202 "name": "BaseBdev2", 00:11:14.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.202 "is_configured": false, 00:11:14.202 "data_offset": 0, 00:11:14.202 "data_size": 0 00:11:14.202 }, 00:11:14.202 { 00:11:14.202 "name": "BaseBdev3", 00:11:14.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.202 "is_configured": false, 00:11:14.202 "data_offset": 0, 00:11:14.202 "data_size": 0 00:11:14.202 }, 00:11:14.202 { 00:11:14.202 "name": "BaseBdev4", 00:11:14.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.202 "is_configured": false, 00:11:14.202 "data_offset": 0, 00:11:14.202 "data_size": 0 00:11:14.202 } 00:11:14.202 ] 00:11:14.202 }' 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.202 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.772 12:03:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.772 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.772 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.772 [2024-11-19 12:03:17.930708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.773 [2024-11-19 12:03:17.930833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.773 [2024-11-19 12:03:17.942672] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.773 [2024-11-19 12:03:17.942771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.773 [2024-11-19 12:03:17.942797] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.773 [2024-11-19 12:03:17.942820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.773 [2024-11-19 12:03:17.942838] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.773 [2024-11-19 12:03:17.942859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.773 [2024-11-19 12:03:17.942876] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:14.773 [2024-11-19 12:03:17.942897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.773 [2024-11-19 12:03:17.990049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.773 BaseBdev1 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.773 12:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.773 [ 00:11:14.773 { 00:11:14.773 "name": "BaseBdev1", 00:11:14.773 "aliases": [ 00:11:14.773 "1fba3746-987e-4d29-a577-da7375191ffe" 00:11:14.773 ], 00:11:14.773 "product_name": "Malloc disk", 00:11:14.773 "block_size": 512, 00:11:14.773 "num_blocks": 65536, 00:11:14.773 "uuid": "1fba3746-987e-4d29-a577-da7375191ffe", 00:11:14.773 "assigned_rate_limits": { 00:11:14.773 "rw_ios_per_sec": 0, 00:11:14.773 "rw_mbytes_per_sec": 0, 00:11:14.773 "r_mbytes_per_sec": 0, 00:11:14.773 "w_mbytes_per_sec": 0 00:11:14.773 }, 00:11:14.773 "claimed": true, 00:11:14.773 "claim_type": "exclusive_write", 00:11:14.773 "zoned": false, 00:11:14.773 "supported_io_types": { 00:11:14.773 "read": true, 00:11:14.773 "write": true, 00:11:14.773 "unmap": true, 00:11:14.773 "flush": true, 00:11:14.773 "reset": true, 00:11:14.773 "nvme_admin": false, 00:11:14.773 "nvme_io": false, 00:11:14.773 "nvme_io_md": false, 00:11:14.773 "write_zeroes": true, 00:11:14.773 "zcopy": true, 00:11:14.773 "get_zone_info": false, 00:11:14.773 "zone_management": false, 00:11:14.773 "zone_append": false, 00:11:14.773 "compare": false, 00:11:14.773 "compare_and_write": false, 00:11:14.773 "abort": true, 00:11:14.773 "seek_hole": false, 00:11:14.773 "seek_data": false, 00:11:14.773 "copy": true, 00:11:14.773 "nvme_iov_md": false 00:11:14.773 }, 00:11:14.773 "memory_domains": [ 00:11:14.773 { 00:11:14.773 "dma_device_id": "system", 00:11:14.773 "dma_device_type": 1 00:11:14.773 }, 00:11:14.773 { 00:11:14.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.773 "dma_device_type": 2 00:11:14.773 } 
00:11:14.773 ], 00:11:14.773 "driver_specific": {} 00:11:14.773 } 00:11:14.773 ] 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.773 12:03:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.773 "name": "Existed_Raid", 00:11:14.773 "uuid": "4c392e64-6040-44ff-bf1d-883bc74189df", 00:11:14.773 "strip_size_kb": 64, 00:11:14.773 "state": "configuring", 00:11:14.773 "raid_level": "concat", 00:11:14.773 "superblock": true, 00:11:14.773 "num_base_bdevs": 4, 00:11:14.773 "num_base_bdevs_discovered": 1, 00:11:14.773 "num_base_bdevs_operational": 4, 00:11:14.773 "base_bdevs_list": [ 00:11:14.773 { 00:11:14.773 "name": "BaseBdev1", 00:11:14.773 "uuid": "1fba3746-987e-4d29-a577-da7375191ffe", 00:11:14.773 "is_configured": true, 00:11:14.773 "data_offset": 2048, 00:11:14.773 "data_size": 63488 00:11:14.773 }, 00:11:14.773 { 00:11:14.773 "name": "BaseBdev2", 00:11:14.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.773 "is_configured": false, 00:11:14.773 "data_offset": 0, 00:11:14.773 "data_size": 0 00:11:14.773 }, 00:11:14.773 { 00:11:14.773 "name": "BaseBdev3", 00:11:14.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.773 "is_configured": false, 00:11:14.773 "data_offset": 0, 00:11:14.773 "data_size": 0 00:11:14.773 }, 00:11:14.773 { 00:11:14.773 "name": "BaseBdev4", 00:11:14.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.773 "is_configured": false, 00:11:14.773 "data_offset": 0, 00:11:14.773 "data_size": 0 00:11:14.773 } 00:11:14.773 ] 00:11:14.773 }' 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.773 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.343 12:03:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.343 [2024-11-19 12:03:18.445301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:15.343 [2024-11-19 12:03:18.445432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.343 [2024-11-19 12:03:18.457338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.343 [2024-11-19 12:03:18.459224] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.343 [2024-11-19 12:03:18.459300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.343 [2024-11-19 12:03:18.459355] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:15.343 [2024-11-19 12:03:18.459381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.343 [2024-11-19 12:03:18.459408] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:15.343 [2024-11-19 12:03:18.459434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.343 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.344 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.344 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.344 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.344 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.344 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.344 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:15.344 "name": "Existed_Raid", 00:11:15.344 "uuid": "772222e7-aac8-4c5d-8680-61fb6f3faf0f", 00:11:15.344 "strip_size_kb": 64, 00:11:15.344 "state": "configuring", 00:11:15.344 "raid_level": "concat", 00:11:15.344 "superblock": true, 00:11:15.344 "num_base_bdevs": 4, 00:11:15.344 "num_base_bdevs_discovered": 1, 00:11:15.344 "num_base_bdevs_operational": 4, 00:11:15.344 "base_bdevs_list": [ 00:11:15.344 { 00:11:15.344 "name": "BaseBdev1", 00:11:15.344 "uuid": "1fba3746-987e-4d29-a577-da7375191ffe", 00:11:15.344 "is_configured": true, 00:11:15.344 "data_offset": 2048, 00:11:15.344 "data_size": 63488 00:11:15.344 }, 00:11:15.344 { 00:11:15.344 "name": "BaseBdev2", 00:11:15.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.344 "is_configured": false, 00:11:15.344 "data_offset": 0, 00:11:15.344 "data_size": 0 00:11:15.344 }, 00:11:15.344 { 00:11:15.344 "name": "BaseBdev3", 00:11:15.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.344 "is_configured": false, 00:11:15.344 "data_offset": 0, 00:11:15.344 "data_size": 0 00:11:15.344 }, 00:11:15.344 { 00:11:15.344 "name": "BaseBdev4", 00:11:15.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.344 "is_configured": false, 00:11:15.344 "data_offset": 0, 00:11:15.344 "data_size": 0 00:11:15.344 } 00:11:15.344 ] 00:11:15.344 }' 00:11:15.344 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.344 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.603 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.603 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.603 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.603 [2024-11-19 12:03:18.955191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:15.603 BaseBdev2 00:11:15.603 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.603 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:15.603 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:15.604 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.604 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:15.604 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.604 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.604 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.604 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.604 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.604 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.604 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.604 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.604 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.863 [ 00:11:15.863 { 00:11:15.863 "name": "BaseBdev2", 00:11:15.863 "aliases": [ 00:11:15.863 "fbae7237-f135-4aec-98f7-a75a1d3e0fec" 00:11:15.863 ], 00:11:15.863 "product_name": "Malloc disk", 00:11:15.863 "block_size": 512, 00:11:15.863 "num_blocks": 65536, 00:11:15.863 "uuid": "fbae7237-f135-4aec-98f7-a75a1d3e0fec", 
00:11:15.863 "assigned_rate_limits": { 00:11:15.863 "rw_ios_per_sec": 0, 00:11:15.864 "rw_mbytes_per_sec": 0, 00:11:15.864 "r_mbytes_per_sec": 0, 00:11:15.864 "w_mbytes_per_sec": 0 00:11:15.864 }, 00:11:15.864 "claimed": true, 00:11:15.864 "claim_type": "exclusive_write", 00:11:15.864 "zoned": false, 00:11:15.864 "supported_io_types": { 00:11:15.864 "read": true, 00:11:15.864 "write": true, 00:11:15.864 "unmap": true, 00:11:15.864 "flush": true, 00:11:15.864 "reset": true, 00:11:15.864 "nvme_admin": false, 00:11:15.864 "nvme_io": false, 00:11:15.864 "nvme_io_md": false, 00:11:15.864 "write_zeroes": true, 00:11:15.864 "zcopy": true, 00:11:15.864 "get_zone_info": false, 00:11:15.864 "zone_management": false, 00:11:15.864 "zone_append": false, 00:11:15.864 "compare": false, 00:11:15.864 "compare_and_write": false, 00:11:15.864 "abort": true, 00:11:15.864 "seek_hole": false, 00:11:15.864 "seek_data": false, 00:11:15.864 "copy": true, 00:11:15.864 "nvme_iov_md": false 00:11:15.864 }, 00:11:15.864 "memory_domains": [ 00:11:15.864 { 00:11:15.864 "dma_device_id": "system", 00:11:15.864 "dma_device_type": 1 00:11:15.864 }, 00:11:15.864 { 00:11:15.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.864 "dma_device_type": 2 00:11:15.864 } 00:11:15.864 ], 00:11:15.864 "driver_specific": {} 00:11:15.864 } 00:11:15.864 ] 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.864 12:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.864 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.864 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.864 "name": "Existed_Raid", 00:11:15.864 "uuid": "772222e7-aac8-4c5d-8680-61fb6f3faf0f", 00:11:15.864 "strip_size_kb": 64, 00:11:15.864 "state": "configuring", 00:11:15.864 "raid_level": "concat", 00:11:15.864 "superblock": true, 00:11:15.864 "num_base_bdevs": 4, 00:11:15.864 "num_base_bdevs_discovered": 2, 00:11:15.864 
"num_base_bdevs_operational": 4, 00:11:15.864 "base_bdevs_list": [ 00:11:15.864 { 00:11:15.864 "name": "BaseBdev1", 00:11:15.864 "uuid": "1fba3746-987e-4d29-a577-da7375191ffe", 00:11:15.864 "is_configured": true, 00:11:15.864 "data_offset": 2048, 00:11:15.864 "data_size": 63488 00:11:15.864 }, 00:11:15.864 { 00:11:15.864 "name": "BaseBdev2", 00:11:15.864 "uuid": "fbae7237-f135-4aec-98f7-a75a1d3e0fec", 00:11:15.864 "is_configured": true, 00:11:15.864 "data_offset": 2048, 00:11:15.864 "data_size": 63488 00:11:15.864 }, 00:11:15.864 { 00:11:15.864 "name": "BaseBdev3", 00:11:15.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.864 "is_configured": false, 00:11:15.864 "data_offset": 0, 00:11:15.864 "data_size": 0 00:11:15.864 }, 00:11:15.864 { 00:11:15.864 "name": "BaseBdev4", 00:11:15.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.864 "is_configured": false, 00:11:15.864 "data_offset": 0, 00:11:15.864 "data_size": 0 00:11:15.864 } 00:11:15.864 ] 00:11:15.864 }' 00:11:15.864 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.864 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.123 [2024-11-19 12:03:19.484455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.123 BaseBdev3 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.123 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.383 [ 00:11:16.383 { 00:11:16.383 "name": "BaseBdev3", 00:11:16.383 "aliases": [ 00:11:16.383 "a4cd02ff-a92d-4194-ab8b-a56634b2a2f7" 00:11:16.383 ], 00:11:16.383 "product_name": "Malloc disk", 00:11:16.383 "block_size": 512, 00:11:16.383 "num_blocks": 65536, 00:11:16.383 "uuid": "a4cd02ff-a92d-4194-ab8b-a56634b2a2f7", 00:11:16.383 "assigned_rate_limits": { 00:11:16.383 "rw_ios_per_sec": 0, 00:11:16.383 "rw_mbytes_per_sec": 0, 00:11:16.383 "r_mbytes_per_sec": 0, 00:11:16.383 "w_mbytes_per_sec": 0 00:11:16.383 }, 00:11:16.383 "claimed": true, 00:11:16.383 "claim_type": "exclusive_write", 00:11:16.383 "zoned": false, 00:11:16.383 "supported_io_types": { 
00:11:16.383 "read": true, 00:11:16.383 "write": true, 00:11:16.383 "unmap": true, 00:11:16.383 "flush": true, 00:11:16.383 "reset": true, 00:11:16.383 "nvme_admin": false, 00:11:16.383 "nvme_io": false, 00:11:16.383 "nvme_io_md": false, 00:11:16.383 "write_zeroes": true, 00:11:16.383 "zcopy": true, 00:11:16.383 "get_zone_info": false, 00:11:16.383 "zone_management": false, 00:11:16.383 "zone_append": false, 00:11:16.383 "compare": false, 00:11:16.383 "compare_and_write": false, 00:11:16.383 "abort": true, 00:11:16.383 "seek_hole": false, 00:11:16.383 "seek_data": false, 00:11:16.383 "copy": true, 00:11:16.383 "nvme_iov_md": false 00:11:16.383 }, 00:11:16.383 "memory_domains": [ 00:11:16.383 { 00:11:16.383 "dma_device_id": "system", 00:11:16.383 "dma_device_type": 1 00:11:16.383 }, 00:11:16.383 { 00:11:16.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.383 "dma_device_type": 2 00:11:16.383 } 00:11:16.383 ], 00:11:16.383 "driver_specific": {} 00:11:16.383 } 00:11:16.383 ] 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.383 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.383 "name": "Existed_Raid", 00:11:16.383 "uuid": "772222e7-aac8-4c5d-8680-61fb6f3faf0f", 00:11:16.383 "strip_size_kb": 64, 00:11:16.383 "state": "configuring", 00:11:16.383 "raid_level": "concat", 00:11:16.383 "superblock": true, 00:11:16.383 "num_base_bdevs": 4, 00:11:16.383 "num_base_bdevs_discovered": 3, 00:11:16.383 "num_base_bdevs_operational": 4, 00:11:16.383 "base_bdevs_list": [ 00:11:16.383 { 00:11:16.383 "name": "BaseBdev1", 00:11:16.383 "uuid": "1fba3746-987e-4d29-a577-da7375191ffe", 00:11:16.383 "is_configured": true, 00:11:16.383 "data_offset": 2048, 00:11:16.383 "data_size": 63488 00:11:16.383 }, 00:11:16.383 { 00:11:16.383 "name": "BaseBdev2", 00:11:16.383 
"uuid": "fbae7237-f135-4aec-98f7-a75a1d3e0fec", 00:11:16.383 "is_configured": true, 00:11:16.383 "data_offset": 2048, 00:11:16.383 "data_size": 63488 00:11:16.383 }, 00:11:16.383 { 00:11:16.383 "name": "BaseBdev3", 00:11:16.383 "uuid": "a4cd02ff-a92d-4194-ab8b-a56634b2a2f7", 00:11:16.383 "is_configured": true, 00:11:16.383 "data_offset": 2048, 00:11:16.383 "data_size": 63488 00:11:16.383 }, 00:11:16.383 { 00:11:16.383 "name": "BaseBdev4", 00:11:16.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.383 "is_configured": false, 00:11:16.384 "data_offset": 0, 00:11:16.384 "data_size": 0 00:11:16.384 } 00:11:16.384 ] 00:11:16.384 }' 00:11:16.384 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.384 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.643 12:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:16.643 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.643 12:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.643 [2024-11-19 12:03:20.014580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:16.644 [2024-11-19 12:03:20.014945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:16.644 [2024-11-19 12:03:20.015018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:16.644 [2024-11-19 12:03:20.015315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:16.644 [2024-11-19 12:03:20.015506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:16.644 BaseBdev4 00:11:16.644 [2024-11-19 12:03:20.015552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:16.644 [2024-11-19 12:03:20.015698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.644 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.644 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:16.644 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:16.644 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.644 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.903 [ 00:11:16.903 { 00:11:16.903 "name": "BaseBdev4", 00:11:16.903 "aliases": [ 00:11:16.903 "900b2617-9e62-4719-8409-ec6c58553431" 00:11:16.903 ], 00:11:16.903 "product_name": "Malloc disk", 00:11:16.903 "block_size": 512, 00:11:16.903 
"num_blocks": 65536, 00:11:16.903 "uuid": "900b2617-9e62-4719-8409-ec6c58553431", 00:11:16.903 "assigned_rate_limits": { 00:11:16.903 "rw_ios_per_sec": 0, 00:11:16.903 "rw_mbytes_per_sec": 0, 00:11:16.903 "r_mbytes_per_sec": 0, 00:11:16.903 "w_mbytes_per_sec": 0 00:11:16.903 }, 00:11:16.903 "claimed": true, 00:11:16.903 "claim_type": "exclusive_write", 00:11:16.903 "zoned": false, 00:11:16.903 "supported_io_types": { 00:11:16.903 "read": true, 00:11:16.903 "write": true, 00:11:16.903 "unmap": true, 00:11:16.903 "flush": true, 00:11:16.903 "reset": true, 00:11:16.903 "nvme_admin": false, 00:11:16.903 "nvme_io": false, 00:11:16.903 "nvme_io_md": false, 00:11:16.903 "write_zeroes": true, 00:11:16.903 "zcopy": true, 00:11:16.903 "get_zone_info": false, 00:11:16.903 "zone_management": false, 00:11:16.903 "zone_append": false, 00:11:16.903 "compare": false, 00:11:16.903 "compare_and_write": false, 00:11:16.903 "abort": true, 00:11:16.903 "seek_hole": false, 00:11:16.903 "seek_data": false, 00:11:16.903 "copy": true, 00:11:16.903 "nvme_iov_md": false 00:11:16.903 }, 00:11:16.903 "memory_domains": [ 00:11:16.903 { 00:11:16.903 "dma_device_id": "system", 00:11:16.903 "dma_device_type": 1 00:11:16.903 }, 00:11:16.903 { 00:11:16.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.903 "dma_device_type": 2 00:11:16.903 } 00:11:16.903 ], 00:11:16.903 "driver_specific": {} 00:11:16.903 } 00:11:16.903 ] 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.903 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.904 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.904 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.904 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.904 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.904 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.904 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.904 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.904 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.904 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.904 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.904 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.904 "name": "Existed_Raid", 00:11:16.904 "uuid": "772222e7-aac8-4c5d-8680-61fb6f3faf0f", 00:11:16.904 "strip_size_kb": 64, 00:11:16.904 "state": "online", 00:11:16.904 "raid_level": "concat", 00:11:16.904 "superblock": true, 00:11:16.904 "num_base_bdevs": 4, 
00:11:16.904 "num_base_bdevs_discovered": 4, 00:11:16.904 "num_base_bdevs_operational": 4, 00:11:16.904 "base_bdevs_list": [ 00:11:16.904 { 00:11:16.904 "name": "BaseBdev1", 00:11:16.904 "uuid": "1fba3746-987e-4d29-a577-da7375191ffe", 00:11:16.904 "is_configured": true, 00:11:16.904 "data_offset": 2048, 00:11:16.904 "data_size": 63488 00:11:16.904 }, 00:11:16.904 { 00:11:16.904 "name": "BaseBdev2", 00:11:16.904 "uuid": "fbae7237-f135-4aec-98f7-a75a1d3e0fec", 00:11:16.904 "is_configured": true, 00:11:16.904 "data_offset": 2048, 00:11:16.904 "data_size": 63488 00:11:16.904 }, 00:11:16.904 { 00:11:16.904 "name": "BaseBdev3", 00:11:16.904 "uuid": "a4cd02ff-a92d-4194-ab8b-a56634b2a2f7", 00:11:16.904 "is_configured": true, 00:11:16.904 "data_offset": 2048, 00:11:16.904 "data_size": 63488 00:11:16.904 }, 00:11:16.904 { 00:11:16.904 "name": "BaseBdev4", 00:11:16.904 "uuid": "900b2617-9e62-4719-8409-ec6c58553431", 00:11:16.904 "is_configured": true, 00:11:16.904 "data_offset": 2048, 00:11:16.904 "data_size": 63488 00:11:16.904 } 00:11:16.904 ] 00:11:16.904 }' 00:11:16.904 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.904 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.163 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:17.163 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:17.163 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.163 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:17.163 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.163 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.163 
12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:17.163 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.163 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.163 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.163 [2024-11-19 12:03:20.482234] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.163 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.163 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.163 "name": "Existed_Raid", 00:11:17.163 "aliases": [ 00:11:17.163 "772222e7-aac8-4c5d-8680-61fb6f3faf0f" 00:11:17.163 ], 00:11:17.163 "product_name": "Raid Volume", 00:11:17.163 "block_size": 512, 00:11:17.163 "num_blocks": 253952, 00:11:17.163 "uuid": "772222e7-aac8-4c5d-8680-61fb6f3faf0f", 00:11:17.163 "assigned_rate_limits": { 00:11:17.163 "rw_ios_per_sec": 0, 00:11:17.163 "rw_mbytes_per_sec": 0, 00:11:17.163 "r_mbytes_per_sec": 0, 00:11:17.163 "w_mbytes_per_sec": 0 00:11:17.163 }, 00:11:17.163 "claimed": false, 00:11:17.163 "zoned": false, 00:11:17.163 "supported_io_types": { 00:11:17.163 "read": true, 00:11:17.163 "write": true, 00:11:17.163 "unmap": true, 00:11:17.163 "flush": true, 00:11:17.163 "reset": true, 00:11:17.163 "nvme_admin": false, 00:11:17.163 "nvme_io": false, 00:11:17.163 "nvme_io_md": false, 00:11:17.163 "write_zeroes": true, 00:11:17.163 "zcopy": false, 00:11:17.163 "get_zone_info": false, 00:11:17.163 "zone_management": false, 00:11:17.163 "zone_append": false, 00:11:17.163 "compare": false, 00:11:17.163 "compare_and_write": false, 00:11:17.163 "abort": false, 00:11:17.163 "seek_hole": false, 00:11:17.163 "seek_data": false, 00:11:17.163 "copy": false, 00:11:17.163 
"nvme_iov_md": false 00:11:17.163 }, 00:11:17.163 "memory_domains": [ 00:11:17.163 { 00:11:17.163 "dma_device_id": "system", 00:11:17.163 "dma_device_type": 1 00:11:17.163 }, 00:11:17.163 { 00:11:17.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.163 "dma_device_type": 2 00:11:17.163 }, 00:11:17.163 { 00:11:17.163 "dma_device_id": "system", 00:11:17.163 "dma_device_type": 1 00:11:17.163 }, 00:11:17.163 { 00:11:17.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.163 "dma_device_type": 2 00:11:17.163 }, 00:11:17.163 { 00:11:17.163 "dma_device_id": "system", 00:11:17.163 "dma_device_type": 1 00:11:17.163 }, 00:11:17.163 { 00:11:17.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.163 "dma_device_type": 2 00:11:17.163 }, 00:11:17.163 { 00:11:17.163 "dma_device_id": "system", 00:11:17.163 "dma_device_type": 1 00:11:17.163 }, 00:11:17.163 { 00:11:17.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.163 "dma_device_type": 2 00:11:17.163 } 00:11:17.163 ], 00:11:17.163 "driver_specific": { 00:11:17.163 "raid": { 00:11:17.163 "uuid": "772222e7-aac8-4c5d-8680-61fb6f3faf0f", 00:11:17.163 "strip_size_kb": 64, 00:11:17.163 "state": "online", 00:11:17.163 "raid_level": "concat", 00:11:17.163 "superblock": true, 00:11:17.163 "num_base_bdevs": 4, 00:11:17.163 "num_base_bdevs_discovered": 4, 00:11:17.163 "num_base_bdevs_operational": 4, 00:11:17.163 "base_bdevs_list": [ 00:11:17.163 { 00:11:17.163 "name": "BaseBdev1", 00:11:17.163 "uuid": "1fba3746-987e-4d29-a577-da7375191ffe", 00:11:17.163 "is_configured": true, 00:11:17.163 "data_offset": 2048, 00:11:17.163 "data_size": 63488 00:11:17.163 }, 00:11:17.163 { 00:11:17.163 "name": "BaseBdev2", 00:11:17.163 "uuid": "fbae7237-f135-4aec-98f7-a75a1d3e0fec", 00:11:17.163 "is_configured": true, 00:11:17.163 "data_offset": 2048, 00:11:17.163 "data_size": 63488 00:11:17.163 }, 00:11:17.163 { 00:11:17.163 "name": "BaseBdev3", 00:11:17.163 "uuid": "a4cd02ff-a92d-4194-ab8b-a56634b2a2f7", 00:11:17.163 "is_configured": true, 
00:11:17.163 "data_offset": 2048, 00:11:17.163 "data_size": 63488 00:11:17.163 }, 00:11:17.163 { 00:11:17.163 "name": "BaseBdev4", 00:11:17.163 "uuid": "900b2617-9e62-4719-8409-ec6c58553431", 00:11:17.163 "is_configured": true, 00:11:17.163 "data_offset": 2048, 00:11:17.163 "data_size": 63488 00:11:17.163 } 00:11:17.164 ] 00:11:17.164 } 00:11:17.164 } 00:11:17.164 }' 00:11:17.164 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.422 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:17.422 BaseBdev2 00:11:17.422 BaseBdev3 00:11:17.422 BaseBdev4' 00:11:17.422 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.422 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.423 12:03:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.423 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.682 [2024-11-19 12:03:20.833301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.682 [2024-11-19 12:03:20.833371] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.682 [2024-11-19 12:03:20.833450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:17.682 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.682 "name": "Existed_Raid", 00:11:17.682 "uuid": "772222e7-aac8-4c5d-8680-61fb6f3faf0f", 00:11:17.682 "strip_size_kb": 64, 00:11:17.682 "state": "offline", 00:11:17.682 "raid_level": "concat", 00:11:17.682 "superblock": true, 00:11:17.682 "num_base_bdevs": 4, 00:11:17.682 "num_base_bdevs_discovered": 3, 00:11:17.682 "num_base_bdevs_operational": 3, 00:11:17.682 "base_bdevs_list": [ 00:11:17.682 { 00:11:17.682 "name": null, 00:11:17.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.683 "is_configured": false, 00:11:17.683 "data_offset": 0, 00:11:17.683 "data_size": 63488 00:11:17.683 }, 00:11:17.683 { 00:11:17.683 "name": "BaseBdev2", 00:11:17.683 "uuid": "fbae7237-f135-4aec-98f7-a75a1d3e0fec", 00:11:17.683 "is_configured": true, 00:11:17.683 "data_offset": 2048, 00:11:17.683 "data_size": 63488 00:11:17.683 }, 00:11:17.683 { 00:11:17.683 "name": "BaseBdev3", 00:11:17.683 "uuid": "a4cd02ff-a92d-4194-ab8b-a56634b2a2f7", 00:11:17.683 "is_configured": true, 00:11:17.683 "data_offset": 2048, 00:11:17.683 "data_size": 63488 00:11:17.683 }, 00:11:17.683 { 00:11:17.683 "name": "BaseBdev4", 00:11:17.683 "uuid": "900b2617-9e62-4719-8409-ec6c58553431", 00:11:17.683 "is_configured": true, 00:11:17.683 "data_offset": 2048, 00:11:17.683 "data_size": 63488 00:11:17.683 } 00:11:17.683 ] 00:11:17.683 }' 00:11:17.683 12:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.683 12:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.257 12:03:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.257 [2024-11-19 12:03:21.431711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.257 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.257 [2024-11-19 12:03:21.582841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:18.529 12:03:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.529 [2024-11-19 12:03:21.740195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:18.529 [2024-11-19 12:03:21.740286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.529 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.790 BaseBdev2 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.790 [ 00:11:18.790 { 00:11:18.790 "name": "BaseBdev2", 00:11:18.790 "aliases": [ 00:11:18.790 
"4a01fdd9-337d-45c0-9325-ab692380ca48" 00:11:18.790 ], 00:11:18.790 "product_name": "Malloc disk", 00:11:18.790 "block_size": 512, 00:11:18.790 "num_blocks": 65536, 00:11:18.790 "uuid": "4a01fdd9-337d-45c0-9325-ab692380ca48", 00:11:18.790 "assigned_rate_limits": { 00:11:18.790 "rw_ios_per_sec": 0, 00:11:18.790 "rw_mbytes_per_sec": 0, 00:11:18.790 "r_mbytes_per_sec": 0, 00:11:18.790 "w_mbytes_per_sec": 0 00:11:18.790 }, 00:11:18.790 "claimed": false, 00:11:18.790 "zoned": false, 00:11:18.790 "supported_io_types": { 00:11:18.790 "read": true, 00:11:18.790 "write": true, 00:11:18.790 "unmap": true, 00:11:18.790 "flush": true, 00:11:18.790 "reset": true, 00:11:18.790 "nvme_admin": false, 00:11:18.790 "nvme_io": false, 00:11:18.790 "nvme_io_md": false, 00:11:18.790 "write_zeroes": true, 00:11:18.790 "zcopy": true, 00:11:18.790 "get_zone_info": false, 00:11:18.790 "zone_management": false, 00:11:18.790 "zone_append": false, 00:11:18.790 "compare": false, 00:11:18.790 "compare_and_write": false, 00:11:18.790 "abort": true, 00:11:18.790 "seek_hole": false, 00:11:18.790 "seek_data": false, 00:11:18.790 "copy": true, 00:11:18.790 "nvme_iov_md": false 00:11:18.790 }, 00:11:18.790 "memory_domains": [ 00:11:18.790 { 00:11:18.790 "dma_device_id": "system", 00:11:18.790 "dma_device_type": 1 00:11:18.790 }, 00:11:18.790 { 00:11:18.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.790 "dma_device_type": 2 00:11:18.790 } 00:11:18.790 ], 00:11:18.790 "driver_specific": {} 00:11:18.790 } 00:11:18.790 ] 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.790 12:03:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.790 12:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.790 BaseBdev3 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.790 [ 00:11:18.790 { 
00:11:18.790 "name": "BaseBdev3", 00:11:18.790 "aliases": [ 00:11:18.790 "06e653a9-3cfa-466b-88d7-648aaab3ce2f" 00:11:18.790 ], 00:11:18.790 "product_name": "Malloc disk", 00:11:18.790 "block_size": 512, 00:11:18.790 "num_blocks": 65536, 00:11:18.790 "uuid": "06e653a9-3cfa-466b-88d7-648aaab3ce2f", 00:11:18.790 "assigned_rate_limits": { 00:11:18.790 "rw_ios_per_sec": 0, 00:11:18.790 "rw_mbytes_per_sec": 0, 00:11:18.790 "r_mbytes_per_sec": 0, 00:11:18.790 "w_mbytes_per_sec": 0 00:11:18.790 }, 00:11:18.790 "claimed": false, 00:11:18.790 "zoned": false, 00:11:18.790 "supported_io_types": { 00:11:18.790 "read": true, 00:11:18.790 "write": true, 00:11:18.790 "unmap": true, 00:11:18.790 "flush": true, 00:11:18.790 "reset": true, 00:11:18.790 "nvme_admin": false, 00:11:18.790 "nvme_io": false, 00:11:18.790 "nvme_io_md": false, 00:11:18.790 "write_zeroes": true, 00:11:18.790 "zcopy": true, 00:11:18.790 "get_zone_info": false, 00:11:18.790 "zone_management": false, 00:11:18.790 "zone_append": false, 00:11:18.790 "compare": false, 00:11:18.790 "compare_and_write": false, 00:11:18.790 "abort": true, 00:11:18.790 "seek_hole": false, 00:11:18.790 "seek_data": false, 00:11:18.790 "copy": true, 00:11:18.790 "nvme_iov_md": false 00:11:18.790 }, 00:11:18.790 "memory_domains": [ 00:11:18.790 { 00:11:18.790 "dma_device_id": "system", 00:11:18.790 "dma_device_type": 1 00:11:18.790 }, 00:11:18.790 { 00:11:18.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.790 "dma_device_type": 2 00:11:18.790 } 00:11:18.790 ], 00:11:18.790 "driver_specific": {} 00:11:18.790 } 00:11:18.790 ] 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.790 BaseBdev4 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:18.790 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:18.791 [ 00:11:18.791 { 00:11:18.791 "name": "BaseBdev4", 00:11:18.791 "aliases": [ 00:11:18.791 "06143ebe-f68a-404f-be20-65c2a8459b47" 00:11:18.791 ], 00:11:18.791 "product_name": "Malloc disk", 00:11:18.791 "block_size": 512, 00:11:18.791 "num_blocks": 65536, 00:11:18.791 "uuid": "06143ebe-f68a-404f-be20-65c2a8459b47", 00:11:18.791 "assigned_rate_limits": { 00:11:18.791 "rw_ios_per_sec": 0, 00:11:18.791 "rw_mbytes_per_sec": 0, 00:11:18.791 "r_mbytes_per_sec": 0, 00:11:18.791 "w_mbytes_per_sec": 0 00:11:18.791 }, 00:11:18.791 "claimed": false, 00:11:18.791 "zoned": false, 00:11:18.791 "supported_io_types": { 00:11:18.791 "read": true, 00:11:18.791 "write": true, 00:11:18.791 "unmap": true, 00:11:18.791 "flush": true, 00:11:18.791 "reset": true, 00:11:18.791 "nvme_admin": false, 00:11:18.791 "nvme_io": false, 00:11:18.791 "nvme_io_md": false, 00:11:18.791 "write_zeroes": true, 00:11:18.791 "zcopy": true, 00:11:18.791 "get_zone_info": false, 00:11:18.791 "zone_management": false, 00:11:18.791 "zone_append": false, 00:11:18.791 "compare": false, 00:11:18.791 "compare_and_write": false, 00:11:18.791 "abort": true, 00:11:18.791 "seek_hole": false, 00:11:18.791 "seek_data": false, 00:11:18.791 "copy": true, 00:11:18.791 "nvme_iov_md": false 00:11:18.791 }, 00:11:18.791 "memory_domains": [ 00:11:18.791 { 00:11:18.791 "dma_device_id": "system", 00:11:18.791 "dma_device_type": 1 00:11:18.791 }, 00:11:18.791 { 00:11:18.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.791 "dma_device_type": 2 00:11:18.791 } 00:11:18.791 ], 00:11:18.791 "driver_specific": {} 00:11:18.791 } 00:11:18.791 ] 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.791 12:03:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.791 [2024-11-19 12:03:22.142873] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.791 [2024-11-19 12:03:22.142953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.791 [2024-11-19 12:03:22.143033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.791 [2024-11-19 12:03:22.144822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.791 [2024-11-19 12:03:22.144928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.791 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.051 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.051 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.051 "name": "Existed_Raid", 00:11:19.051 "uuid": "08cc0ffd-ad1f-4e10-8425-506245b4c14b", 00:11:19.051 "strip_size_kb": 64, 00:11:19.051 "state": "configuring", 00:11:19.051 "raid_level": "concat", 00:11:19.051 "superblock": true, 00:11:19.051 "num_base_bdevs": 4, 00:11:19.051 "num_base_bdevs_discovered": 3, 00:11:19.051 "num_base_bdevs_operational": 4, 00:11:19.051 "base_bdevs_list": [ 00:11:19.051 { 00:11:19.051 "name": "BaseBdev1", 00:11:19.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.051 "is_configured": false, 00:11:19.051 "data_offset": 0, 00:11:19.051 "data_size": 0 00:11:19.051 }, 00:11:19.051 { 00:11:19.051 "name": "BaseBdev2", 00:11:19.051 "uuid": "4a01fdd9-337d-45c0-9325-ab692380ca48", 00:11:19.051 "is_configured": true, 00:11:19.051 "data_offset": 2048, 00:11:19.051 "data_size": 63488 
00:11:19.051 }, 00:11:19.051 { 00:11:19.051 "name": "BaseBdev3", 00:11:19.051 "uuid": "06e653a9-3cfa-466b-88d7-648aaab3ce2f", 00:11:19.051 "is_configured": true, 00:11:19.051 "data_offset": 2048, 00:11:19.051 "data_size": 63488 00:11:19.051 }, 00:11:19.051 { 00:11:19.051 "name": "BaseBdev4", 00:11:19.051 "uuid": "06143ebe-f68a-404f-be20-65c2a8459b47", 00:11:19.051 "is_configured": true, 00:11:19.051 "data_offset": 2048, 00:11:19.051 "data_size": 63488 00:11:19.051 } 00:11:19.051 ] 00:11:19.051 }' 00:11:19.051 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.051 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.311 [2024-11-19 12:03:22.598135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.311 "name": "Existed_Raid", 00:11:19.311 "uuid": "08cc0ffd-ad1f-4e10-8425-506245b4c14b", 00:11:19.311 "strip_size_kb": 64, 00:11:19.311 "state": "configuring", 00:11:19.311 "raid_level": "concat", 00:11:19.311 "superblock": true, 00:11:19.311 "num_base_bdevs": 4, 00:11:19.311 "num_base_bdevs_discovered": 2, 00:11:19.311 "num_base_bdevs_operational": 4, 00:11:19.311 "base_bdevs_list": [ 00:11:19.311 { 00:11:19.311 "name": "BaseBdev1", 00:11:19.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.311 "is_configured": false, 00:11:19.311 "data_offset": 0, 00:11:19.311 "data_size": 0 00:11:19.311 }, 00:11:19.311 { 00:11:19.311 "name": null, 00:11:19.311 "uuid": "4a01fdd9-337d-45c0-9325-ab692380ca48", 00:11:19.311 "is_configured": false, 00:11:19.311 "data_offset": 0, 00:11:19.311 "data_size": 63488 
00:11:19.311 }, 00:11:19.311 { 00:11:19.311 "name": "BaseBdev3", 00:11:19.311 "uuid": "06e653a9-3cfa-466b-88d7-648aaab3ce2f", 00:11:19.311 "is_configured": true, 00:11:19.311 "data_offset": 2048, 00:11:19.311 "data_size": 63488 00:11:19.311 }, 00:11:19.311 { 00:11:19.311 "name": "BaseBdev4", 00:11:19.311 "uuid": "06143ebe-f68a-404f-be20-65c2a8459b47", 00:11:19.311 "is_configured": true, 00:11:19.311 "data_offset": 2048, 00:11:19.311 "data_size": 63488 00:11:19.311 } 00:11:19.311 ] 00:11:19.311 }' 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.311 12:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.879 [2024-11-19 12:03:23.121515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.879 BaseBdev1 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.879 [ 00:11:19.879 { 00:11:19.879 "name": "BaseBdev1", 00:11:19.879 "aliases": [ 00:11:19.879 "e090bc4c-3d73-497c-a51a-674c4dfa7222" 00:11:19.879 ], 00:11:19.879 "product_name": "Malloc disk", 00:11:19.879 "block_size": 512, 00:11:19.879 "num_blocks": 65536, 00:11:19.879 "uuid": "e090bc4c-3d73-497c-a51a-674c4dfa7222", 00:11:19.879 "assigned_rate_limits": { 00:11:19.879 "rw_ios_per_sec": 0, 00:11:19.879 "rw_mbytes_per_sec": 0, 
00:11:19.879 "r_mbytes_per_sec": 0, 00:11:19.879 "w_mbytes_per_sec": 0 00:11:19.879 }, 00:11:19.879 "claimed": true, 00:11:19.879 "claim_type": "exclusive_write", 00:11:19.879 "zoned": false, 00:11:19.879 "supported_io_types": { 00:11:19.879 "read": true, 00:11:19.879 "write": true, 00:11:19.879 "unmap": true, 00:11:19.879 "flush": true, 00:11:19.879 "reset": true, 00:11:19.879 "nvme_admin": false, 00:11:19.879 "nvme_io": false, 00:11:19.879 "nvme_io_md": false, 00:11:19.879 "write_zeroes": true, 00:11:19.879 "zcopy": true, 00:11:19.879 "get_zone_info": false, 00:11:19.879 "zone_management": false, 00:11:19.879 "zone_append": false, 00:11:19.879 "compare": false, 00:11:19.879 "compare_and_write": false, 00:11:19.879 "abort": true, 00:11:19.879 "seek_hole": false, 00:11:19.879 "seek_data": false, 00:11:19.879 "copy": true, 00:11:19.879 "nvme_iov_md": false 00:11:19.879 }, 00:11:19.879 "memory_domains": [ 00:11:19.879 { 00:11:19.879 "dma_device_id": "system", 00:11:19.879 "dma_device_type": 1 00:11:19.879 }, 00:11:19.879 { 00:11:19.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.879 "dma_device_type": 2 00:11:19.879 } 00:11:19.879 ], 00:11:19.879 "driver_specific": {} 00:11:19.879 } 00:11:19.879 ] 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.879 12:03:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.879 "name": "Existed_Raid", 00:11:19.879 "uuid": "08cc0ffd-ad1f-4e10-8425-506245b4c14b", 00:11:19.879 "strip_size_kb": 64, 00:11:19.879 "state": "configuring", 00:11:19.879 "raid_level": "concat", 00:11:19.879 "superblock": true, 00:11:19.879 "num_base_bdevs": 4, 00:11:19.879 "num_base_bdevs_discovered": 3, 00:11:19.879 "num_base_bdevs_operational": 4, 00:11:19.879 "base_bdevs_list": [ 00:11:19.879 { 00:11:19.879 "name": "BaseBdev1", 00:11:19.879 "uuid": "e090bc4c-3d73-497c-a51a-674c4dfa7222", 00:11:19.879 "is_configured": true, 00:11:19.879 "data_offset": 2048, 00:11:19.879 "data_size": 63488 00:11:19.879 }, 00:11:19.879 { 
00:11:19.879 "name": null, 00:11:19.879 "uuid": "4a01fdd9-337d-45c0-9325-ab692380ca48", 00:11:19.879 "is_configured": false, 00:11:19.879 "data_offset": 0, 00:11:19.879 "data_size": 63488 00:11:19.879 }, 00:11:19.879 { 00:11:19.879 "name": "BaseBdev3", 00:11:19.879 "uuid": "06e653a9-3cfa-466b-88d7-648aaab3ce2f", 00:11:19.879 "is_configured": true, 00:11:19.879 "data_offset": 2048, 00:11:19.879 "data_size": 63488 00:11:19.879 }, 00:11:19.879 { 00:11:19.879 "name": "BaseBdev4", 00:11:19.879 "uuid": "06143ebe-f68a-404f-be20-65c2a8459b47", 00:11:19.879 "is_configured": true, 00:11:19.879 "data_offset": 2048, 00:11:19.879 "data_size": 63488 00:11:19.879 } 00:11:19.879 ] 00:11:19.879 }' 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.879 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.447 [2024-11-19 12:03:23.668634] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.447 12:03:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.447 "name": "Existed_Raid", 00:11:20.447 "uuid": "08cc0ffd-ad1f-4e10-8425-506245b4c14b", 00:11:20.447 "strip_size_kb": 64, 00:11:20.447 "state": "configuring", 00:11:20.447 "raid_level": "concat", 00:11:20.447 "superblock": true, 00:11:20.447 "num_base_bdevs": 4, 00:11:20.447 "num_base_bdevs_discovered": 2, 00:11:20.447 "num_base_bdevs_operational": 4, 00:11:20.447 "base_bdevs_list": [ 00:11:20.447 { 00:11:20.447 "name": "BaseBdev1", 00:11:20.447 "uuid": "e090bc4c-3d73-497c-a51a-674c4dfa7222", 00:11:20.447 "is_configured": true, 00:11:20.447 "data_offset": 2048, 00:11:20.447 "data_size": 63488 00:11:20.447 }, 00:11:20.447 { 00:11:20.447 "name": null, 00:11:20.447 "uuid": "4a01fdd9-337d-45c0-9325-ab692380ca48", 00:11:20.447 "is_configured": false, 00:11:20.447 "data_offset": 0, 00:11:20.447 "data_size": 63488 00:11:20.447 }, 00:11:20.447 { 00:11:20.447 "name": null, 00:11:20.447 "uuid": "06e653a9-3cfa-466b-88d7-648aaab3ce2f", 00:11:20.447 "is_configured": false, 00:11:20.447 "data_offset": 0, 00:11:20.447 "data_size": 63488 00:11:20.447 }, 00:11:20.447 { 00:11:20.447 "name": "BaseBdev4", 00:11:20.447 "uuid": "06143ebe-f68a-404f-be20-65c2a8459b47", 00:11:20.447 "is_configured": true, 00:11:20.447 "data_offset": 2048, 00:11:20.447 "data_size": 63488 00:11:20.447 } 00:11:20.447 ] 00:11:20.447 }' 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.447 12:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.014 
12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.014 [2024-11-19 12:03:24.171783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.014 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.015 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.015 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.015 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:21.015 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.015 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.015 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.015 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.015 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.015 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.015 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.015 "name": "Existed_Raid", 00:11:21.015 "uuid": "08cc0ffd-ad1f-4e10-8425-506245b4c14b", 00:11:21.015 "strip_size_kb": 64, 00:11:21.015 "state": "configuring", 00:11:21.015 "raid_level": "concat", 00:11:21.015 "superblock": true, 00:11:21.015 "num_base_bdevs": 4, 00:11:21.015 "num_base_bdevs_discovered": 3, 00:11:21.015 "num_base_bdevs_operational": 4, 00:11:21.015 "base_bdevs_list": [ 00:11:21.015 { 00:11:21.015 "name": "BaseBdev1", 00:11:21.015 "uuid": "e090bc4c-3d73-497c-a51a-674c4dfa7222", 00:11:21.015 "is_configured": true, 00:11:21.015 "data_offset": 2048, 00:11:21.015 "data_size": 63488 00:11:21.015 }, 00:11:21.015 { 00:11:21.015 "name": null, 00:11:21.015 "uuid": "4a01fdd9-337d-45c0-9325-ab692380ca48", 00:11:21.015 "is_configured": false, 00:11:21.015 "data_offset": 0, 00:11:21.015 "data_size": 63488 00:11:21.015 }, 00:11:21.015 { 00:11:21.015 "name": "BaseBdev3", 00:11:21.015 "uuid": "06e653a9-3cfa-466b-88d7-648aaab3ce2f", 00:11:21.015 "is_configured": true, 00:11:21.015 "data_offset": 2048, 00:11:21.015 "data_size": 63488 00:11:21.015 }, 00:11:21.015 { 00:11:21.015 "name": "BaseBdev4", 00:11:21.015 "uuid": 
"06143ebe-f68a-404f-be20-65c2a8459b47", 00:11:21.015 "is_configured": true, 00:11:21.015 "data_offset": 2048, 00:11:21.015 "data_size": 63488 00:11:21.015 } 00:11:21.015 ] 00:11:21.015 }' 00:11:21.015 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.015 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.274 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.274 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.274 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.274 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.533 [2024-11-19 12:03:24.687064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.533 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.533 "name": "Existed_Raid", 00:11:21.533 "uuid": "08cc0ffd-ad1f-4e10-8425-506245b4c14b", 00:11:21.533 "strip_size_kb": 64, 00:11:21.533 "state": "configuring", 00:11:21.533 "raid_level": "concat", 00:11:21.533 "superblock": true, 00:11:21.533 "num_base_bdevs": 4, 00:11:21.533 "num_base_bdevs_discovered": 2, 00:11:21.533 "num_base_bdevs_operational": 4, 00:11:21.533 "base_bdevs_list": [ 00:11:21.533 { 00:11:21.533 "name": null, 00:11:21.533 
"uuid": "e090bc4c-3d73-497c-a51a-674c4dfa7222", 00:11:21.533 "is_configured": false, 00:11:21.533 "data_offset": 0, 00:11:21.533 "data_size": 63488 00:11:21.533 }, 00:11:21.533 { 00:11:21.534 "name": null, 00:11:21.534 "uuid": "4a01fdd9-337d-45c0-9325-ab692380ca48", 00:11:21.534 "is_configured": false, 00:11:21.534 "data_offset": 0, 00:11:21.534 "data_size": 63488 00:11:21.534 }, 00:11:21.534 { 00:11:21.534 "name": "BaseBdev3", 00:11:21.534 "uuid": "06e653a9-3cfa-466b-88d7-648aaab3ce2f", 00:11:21.534 "is_configured": true, 00:11:21.534 "data_offset": 2048, 00:11:21.534 "data_size": 63488 00:11:21.534 }, 00:11:21.534 { 00:11:21.534 "name": "BaseBdev4", 00:11:21.534 "uuid": "06143ebe-f68a-404f-be20-65c2a8459b47", 00:11:21.534 "is_configured": true, 00:11:21.534 "data_offset": 2048, 00:11:21.534 "data_size": 63488 00:11:21.534 } 00:11:21.534 ] 00:11:21.534 }' 00:11:21.534 12:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.534 12:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.102 [2024-11-19 12:03:25.298646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.102 12:03:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.102 "name": "Existed_Raid", 00:11:22.102 "uuid": "08cc0ffd-ad1f-4e10-8425-506245b4c14b", 00:11:22.102 "strip_size_kb": 64, 00:11:22.102 "state": "configuring", 00:11:22.102 "raid_level": "concat", 00:11:22.102 "superblock": true, 00:11:22.102 "num_base_bdevs": 4, 00:11:22.102 "num_base_bdevs_discovered": 3, 00:11:22.102 "num_base_bdevs_operational": 4, 00:11:22.102 "base_bdevs_list": [ 00:11:22.102 { 00:11:22.102 "name": null, 00:11:22.102 "uuid": "e090bc4c-3d73-497c-a51a-674c4dfa7222", 00:11:22.102 "is_configured": false, 00:11:22.102 "data_offset": 0, 00:11:22.102 "data_size": 63488 00:11:22.102 }, 00:11:22.102 { 00:11:22.102 "name": "BaseBdev2", 00:11:22.102 "uuid": "4a01fdd9-337d-45c0-9325-ab692380ca48", 00:11:22.102 "is_configured": true, 00:11:22.102 "data_offset": 2048, 00:11:22.102 "data_size": 63488 00:11:22.102 }, 00:11:22.102 { 00:11:22.102 "name": "BaseBdev3", 00:11:22.102 "uuid": "06e653a9-3cfa-466b-88d7-648aaab3ce2f", 00:11:22.102 "is_configured": true, 00:11:22.102 "data_offset": 2048, 00:11:22.102 "data_size": 63488 00:11:22.102 }, 00:11:22.102 { 00:11:22.102 "name": "BaseBdev4", 00:11:22.102 "uuid": "06143ebe-f68a-404f-be20-65c2a8459b47", 00:11:22.102 "is_configured": true, 00:11:22.102 "data_offset": 2048, 00:11:22.102 "data_size": 63488 00:11:22.102 } 00:11:22.102 ] 00:11:22.102 }' 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.102 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:22.670 12:03:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e090bc4c-3d73-497c-a51a-674c4dfa7222 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.670 [2024-11-19 12:03:25.858115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:22.670 [2024-11-19 12:03:25.858437] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:22.670 [2024-11-19 12:03:25.858485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:22.670 [2024-11-19 12:03:25.858779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:22.670 [2024-11-19 12:03:25.858960] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:22.670 [2024-11-19 12:03:25.859019] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:22.670 NewBaseBdev 00:11:22.670 [2024-11-19 12:03:25.859279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:22.670 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.670 12:03:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.670 [ 00:11:22.670 { 00:11:22.670 "name": "NewBaseBdev", 00:11:22.670 "aliases": [ 00:11:22.670 "e090bc4c-3d73-497c-a51a-674c4dfa7222" 00:11:22.670 ], 00:11:22.670 "product_name": "Malloc disk", 00:11:22.670 "block_size": 512, 00:11:22.670 "num_blocks": 65536, 00:11:22.670 "uuid": "e090bc4c-3d73-497c-a51a-674c4dfa7222", 00:11:22.670 "assigned_rate_limits": { 00:11:22.670 "rw_ios_per_sec": 0, 00:11:22.670 "rw_mbytes_per_sec": 0, 00:11:22.670 "r_mbytes_per_sec": 0, 00:11:22.670 "w_mbytes_per_sec": 0 00:11:22.670 }, 00:11:22.670 "claimed": true, 00:11:22.670 "claim_type": "exclusive_write", 00:11:22.670 "zoned": false, 00:11:22.670 "supported_io_types": { 00:11:22.670 "read": true, 00:11:22.670 "write": true, 00:11:22.670 "unmap": true, 00:11:22.670 "flush": true, 00:11:22.670 "reset": true, 00:11:22.670 "nvme_admin": false, 00:11:22.670 "nvme_io": false, 00:11:22.670 "nvme_io_md": false, 00:11:22.670 "write_zeroes": true, 00:11:22.670 "zcopy": true, 00:11:22.670 "get_zone_info": false, 00:11:22.670 "zone_management": false, 00:11:22.670 "zone_append": false, 00:11:22.670 "compare": false, 00:11:22.670 "compare_and_write": false, 00:11:22.670 "abort": true, 00:11:22.670 "seek_hole": false, 00:11:22.670 "seek_data": false, 00:11:22.670 "copy": true, 00:11:22.670 "nvme_iov_md": false 00:11:22.670 }, 00:11:22.670 "memory_domains": [ 00:11:22.670 { 00:11:22.670 "dma_device_id": "system", 00:11:22.670 "dma_device_type": 1 00:11:22.670 }, 00:11:22.670 { 00:11:22.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.670 "dma_device_type": 2 00:11:22.670 } 00:11:22.671 ], 00:11:22.671 "driver_specific": {} 00:11:22.671 } 00:11:22.671 ] 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:22.671 12:03:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.671 "name": "Existed_Raid", 00:11:22.671 "uuid": "08cc0ffd-ad1f-4e10-8425-506245b4c14b", 00:11:22.671 "strip_size_kb": 64, 00:11:22.671 
"state": "online", 00:11:22.671 "raid_level": "concat", 00:11:22.671 "superblock": true, 00:11:22.671 "num_base_bdevs": 4, 00:11:22.671 "num_base_bdevs_discovered": 4, 00:11:22.671 "num_base_bdevs_operational": 4, 00:11:22.671 "base_bdevs_list": [ 00:11:22.671 { 00:11:22.671 "name": "NewBaseBdev", 00:11:22.671 "uuid": "e090bc4c-3d73-497c-a51a-674c4dfa7222", 00:11:22.671 "is_configured": true, 00:11:22.671 "data_offset": 2048, 00:11:22.671 "data_size": 63488 00:11:22.671 }, 00:11:22.671 { 00:11:22.671 "name": "BaseBdev2", 00:11:22.671 "uuid": "4a01fdd9-337d-45c0-9325-ab692380ca48", 00:11:22.671 "is_configured": true, 00:11:22.671 "data_offset": 2048, 00:11:22.671 "data_size": 63488 00:11:22.671 }, 00:11:22.671 { 00:11:22.671 "name": "BaseBdev3", 00:11:22.671 "uuid": "06e653a9-3cfa-466b-88d7-648aaab3ce2f", 00:11:22.671 "is_configured": true, 00:11:22.671 "data_offset": 2048, 00:11:22.671 "data_size": 63488 00:11:22.671 }, 00:11:22.671 { 00:11:22.671 "name": "BaseBdev4", 00:11:22.671 "uuid": "06143ebe-f68a-404f-be20-65c2a8459b47", 00:11:22.671 "is_configured": true, 00:11:22.671 "data_offset": 2048, 00:11:22.671 "data_size": 63488 00:11:22.671 } 00:11:22.671 ] 00:11:22.671 }' 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.671 12:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.238 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:23.238 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:23.238 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.239 
12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.239 [2024-11-19 12:03:26.389603] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.239 "name": "Existed_Raid", 00:11:23.239 "aliases": [ 00:11:23.239 "08cc0ffd-ad1f-4e10-8425-506245b4c14b" 00:11:23.239 ], 00:11:23.239 "product_name": "Raid Volume", 00:11:23.239 "block_size": 512, 00:11:23.239 "num_blocks": 253952, 00:11:23.239 "uuid": "08cc0ffd-ad1f-4e10-8425-506245b4c14b", 00:11:23.239 "assigned_rate_limits": { 00:11:23.239 "rw_ios_per_sec": 0, 00:11:23.239 "rw_mbytes_per_sec": 0, 00:11:23.239 "r_mbytes_per_sec": 0, 00:11:23.239 "w_mbytes_per_sec": 0 00:11:23.239 }, 00:11:23.239 "claimed": false, 00:11:23.239 "zoned": false, 00:11:23.239 "supported_io_types": { 00:11:23.239 "read": true, 00:11:23.239 "write": true, 00:11:23.239 "unmap": true, 00:11:23.239 "flush": true, 00:11:23.239 "reset": true, 00:11:23.239 "nvme_admin": false, 00:11:23.239 "nvme_io": false, 00:11:23.239 "nvme_io_md": false, 00:11:23.239 "write_zeroes": true, 00:11:23.239 "zcopy": false, 00:11:23.239 "get_zone_info": false, 00:11:23.239 "zone_management": false, 00:11:23.239 "zone_append": false, 00:11:23.239 "compare": false, 00:11:23.239 "compare_and_write": false, 00:11:23.239 "abort": 
false, 00:11:23.239 "seek_hole": false, 00:11:23.239 "seek_data": false, 00:11:23.239 "copy": false, 00:11:23.239 "nvme_iov_md": false 00:11:23.239 }, 00:11:23.239 "memory_domains": [ 00:11:23.239 { 00:11:23.239 "dma_device_id": "system", 00:11:23.239 "dma_device_type": 1 00:11:23.239 }, 00:11:23.239 { 00:11:23.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.239 "dma_device_type": 2 00:11:23.239 }, 00:11:23.239 { 00:11:23.239 "dma_device_id": "system", 00:11:23.239 "dma_device_type": 1 00:11:23.239 }, 00:11:23.239 { 00:11:23.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.239 "dma_device_type": 2 00:11:23.239 }, 00:11:23.239 { 00:11:23.239 "dma_device_id": "system", 00:11:23.239 "dma_device_type": 1 00:11:23.239 }, 00:11:23.239 { 00:11:23.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.239 "dma_device_type": 2 00:11:23.239 }, 00:11:23.239 { 00:11:23.239 "dma_device_id": "system", 00:11:23.239 "dma_device_type": 1 00:11:23.239 }, 00:11:23.239 { 00:11:23.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.239 "dma_device_type": 2 00:11:23.239 } 00:11:23.239 ], 00:11:23.239 "driver_specific": { 00:11:23.239 "raid": { 00:11:23.239 "uuid": "08cc0ffd-ad1f-4e10-8425-506245b4c14b", 00:11:23.239 "strip_size_kb": 64, 00:11:23.239 "state": "online", 00:11:23.239 "raid_level": "concat", 00:11:23.239 "superblock": true, 00:11:23.239 "num_base_bdevs": 4, 00:11:23.239 "num_base_bdevs_discovered": 4, 00:11:23.239 "num_base_bdevs_operational": 4, 00:11:23.239 "base_bdevs_list": [ 00:11:23.239 { 00:11:23.239 "name": "NewBaseBdev", 00:11:23.239 "uuid": "e090bc4c-3d73-497c-a51a-674c4dfa7222", 00:11:23.239 "is_configured": true, 00:11:23.239 "data_offset": 2048, 00:11:23.239 "data_size": 63488 00:11:23.239 }, 00:11:23.239 { 00:11:23.239 "name": "BaseBdev2", 00:11:23.239 "uuid": "4a01fdd9-337d-45c0-9325-ab692380ca48", 00:11:23.239 "is_configured": true, 00:11:23.239 "data_offset": 2048, 00:11:23.239 "data_size": 63488 00:11:23.239 }, 00:11:23.239 { 00:11:23.239 
"name": "BaseBdev3",
00:11:23.239 "uuid": "06e653a9-3cfa-466b-88d7-648aaab3ce2f",
00:11:23.239 "is_configured": true,
00:11:23.239 "data_offset": 2048,
00:11:23.239 "data_size": 63488
00:11:23.239 },
00:11:23.239 {
00:11:23.239 "name": "BaseBdev4",
00:11:23.239 "uuid": "06143ebe-f68a-404f-be20-65c2a8459b47",
00:11:23.239 "is_configured": true,
00:11:23.239 "data_offset": 2048,
00:11:23.239 "data_size": 63488
00:11:23.239 }
00:11:23.239 ]
00:11:23.239 }
00:11:23.239 }
00:11:23.239 }'
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:11:23.239 BaseBdev2
00:11:23.239 BaseBdev3
00:11:23.239 BaseBdev4'
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:23.239 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:23.498 [2024-11-19 12:03:26.684698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:23.498 [2024-11-19 12:03:26.684727] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:23.498 [2024-11-19 12:03:26.684799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:23.498 [2024-11-19 12:03:26.684869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:23.498 [2024-11-19 12:03:26.684879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71965
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71965 ']'
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71965
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71965
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71965' killing process with pid 71965
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71965 [2024-11-19 12:03:26.729543] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:23.498 12:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71965
00:11:23.757 [2024-11-19 12:03:27.125836] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:25.133 12:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:11:25.133
00:11:25.133 real 0m11.724s
00:11:25.133 user 0m18.644s
00:11:25.133 sys 0m2.103s
00:11:25.133 12:03:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:25.133 12:03:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:25.133 ************************************
00:11:25.133 END TEST raid_state_function_test_sb
00:11:25.133 ************************************
00:11:25.133 12:03:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4
00:11:25.133 12:03:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:25.133 12:03:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:25.133 12:03:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:25.133 ************************************
00:11:25.133 START TEST raid_superblock_test
00:11:25.133 ************************************
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']'
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72634
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72634
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72634 ']'
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:25.133 12:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.133 [2024-11-19 12:03:28.403174] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:11:25.133 [2024-11-19 12:03:28.403316] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72634 ]
00:11:25.391 [2024-11-19 12:03:28.586478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:25.391 [2024-11-19 12:03:28.701559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:25.651 [2024-11-19 12:03:28.902516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:25.651 [2024-11-19 12:03:28.902571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.910 malloc1
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.910 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.169 [2024-11-19 12:03:29.287267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:26.169 [2024-11-19 12:03:29.287392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:26.169 [2024-11-19 12:03:29.287442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:26.169 [2024-11-19 12:03:29.287515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:26.169 [2024-11-19 12:03:29.289838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:26.169 [2024-11-19 12:03:29.289906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:26.169 pt1
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.169 malloc2
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.169 [2024-11-19 12:03:29.346386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:26.169 [2024-11-19 12:03:29.346489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:26.169 [2024-11-19 12:03:29.346515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:11:26.169 [2024-11-19 12:03:29.346541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:26.169 [2024-11-19 12:03:29.348637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:26.169 [2024-11-19 12:03:29.348673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:26.169 pt2
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.169 malloc3
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.169 [2024-11-19 12:03:29.416019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:26.169 [2024-11-19 12:03:29.416120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:26.169 [2024-11-19 12:03:29.416164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:11:26.169 [2024-11-19 12:03:29.416216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:26.169 [2024-11-19 12:03:29.418495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:26.169 [2024-11-19 12:03:29.418563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:26.169 pt3
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.169 malloc4
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.169 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.169 [2024-11-19 12:03:29.475985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:26.169 [2024-11-19 12:03:29.476091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:26.169 [2024-11-19 12:03:29.476154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:11:26.169 [2024-11-19 12:03:29.476189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:26.169 [2024-11-19 12:03:29.478470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:26.169 [2024-11-19 12:03:29.478537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:11:26.170 pt4
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.170 [2024-11-19 12:03:29.488024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:26.170 [2024-11-19 12:03:29.489888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:26.170 [2024-11-19 12:03:29.490003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:26.170 [2024-11-19 12:03:29.490100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:11:26.170 [2024-11-19 12:03:29.490328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:11:26.170 [2024-11-19 12:03:29.490374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:26.170 [2024-11-19 12:03:29.490628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:11:26.170 [2024-11-19 12:03:29.490792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:11:26.170 [2024-11-19 12:03:29.490805] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:11:26.170 [2024-11-19 12:03:29.490955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:26.170 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.428 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:26.428 "name": "raid_bdev1",
00:11:26.428 "uuid": "7380edb0-e998-4486-abcc-3a61dbf0608c",
00:11:26.428 "strip_size_kb": 64,
00:11:26.428 "state": "online",
00:11:26.428 "raid_level": "concat",
00:11:26.428 "superblock": true,
00:11:26.428 "num_base_bdevs": 4,
00:11:26.428 "num_base_bdevs_discovered": 4,
00:11:26.428 "num_base_bdevs_operational": 4,
00:11:26.428 "base_bdevs_list": [
00:11:26.428 {
00:11:26.428 "name": "pt1",
00:11:26.428 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:26.428 "is_configured": true,
00:11:26.428 "data_offset": 2048,
00:11:26.428 "data_size": 63488
00:11:26.428 },
00:11:26.428 {
00:11:26.428 "name": "pt2",
00:11:26.428 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:26.428 "is_configured": true,
00:11:26.428 "data_offset": 2048,
00:11:26.428 "data_size": 63488
00:11:26.428 },
00:11:26.428 {
00:11:26.428 "name": "pt3",
00:11:26.428 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:26.428 "is_configured": true,
00:11:26.428 "data_offset": 2048,
00:11:26.428 "data_size": 63488
00:11:26.428 },
00:11:26.428 {
00:11:26.428 "name": "pt4",
00:11:26.428 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:26.428 "is_configured": true,
00:11:26.428 "data_offset": 2048,
00:11:26.428 "data_size": 63488
00:11:26.428 }
00:11:26.428 ]
00:11:26.428 }'
00:11:26.428 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:26.428 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.686 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:11:26.686 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:26.686 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:26.686 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:26.686 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:26.686 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:26.686 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:26.686 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.686 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.686 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:26.686 [2024-11-19 12:03:29.943527] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:26.686 12:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.686 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:26.686 "name": "raid_bdev1",
00:11:26.686 "aliases": [
00:11:26.686 "7380edb0-e998-4486-abcc-3a61dbf0608c"
00:11:26.686 ],
00:11:26.686 "product_name": "Raid Volume",
00:11:26.686 "block_size": 512,
00:11:26.686 "num_blocks": 253952,
00:11:26.686 "uuid": "7380edb0-e998-4486-abcc-3a61dbf0608c",
00:11:26.686 "assigned_rate_limits": {
00:11:26.686 "rw_ios_per_sec": 0,
00:11:26.686 "rw_mbytes_per_sec": 0,
00:11:26.686 "r_mbytes_per_sec": 0,
00:11:26.686 "w_mbytes_per_sec": 0
00:11:26.686 },
00:11:26.686 "claimed": false,
00:11:26.686 "zoned": false,
00:11:26.686 "supported_io_types": {
00:11:26.686 "read": true,
00:11:26.686 "write": true,
00:11:26.686 "unmap": true,
00:11:26.686 "flush": true,
00:11:26.686 "reset": true,
00:11:26.686 "nvme_admin": false,
00:11:26.686 "nvme_io": false,
00:11:26.686 "nvme_io_md": false,
00:11:26.686 "write_zeroes": true,
00:11:26.686 "zcopy": false,
00:11:26.686 "get_zone_info": false,
00:11:26.686 "zone_management": false,
00:11:26.686 "zone_append": false,
00:11:26.686 "compare": false,
00:11:26.686 "compare_and_write": false,
00:11:26.686 "abort": false,
00:11:26.686 "seek_hole": false,
00:11:26.686 "seek_data": false,
00:11:26.686 "copy": false,
00:11:26.686 "nvme_iov_md": false
00:11:26.686 },
00:11:26.686 "memory_domains": [
00:11:26.686 {
00:11:26.686 "dma_device_id": "system",
00:11:26.686 "dma_device_type": 1
00:11:26.686 },
00:11:26.686 {
00:11:26.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:26.686 "dma_device_type": 2
00:11:26.686 },
00:11:26.686 {
00:11:26.686 "dma_device_id": "system",
00:11:26.686 "dma_device_type": 1
00:11:26.686 },
00:11:26.686 {
00:11:26.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:26.686 "dma_device_type": 2
00:11:26.686 },
00:11:26.686 {
00:11:26.686 "dma_device_id": "system",
00:11:26.686 "dma_device_type": 1
00:11:26.686 },
00:11:26.686 {
00:11:26.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:26.686 "dma_device_type": 2
00:11:26.686 },
00:11:26.686 {
00:11:26.686 "dma_device_id": "system",
00:11:26.686 "dma_device_type": 1
00:11:26.686 },
00:11:26.686 {
00:11:26.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:26.686 "dma_device_type": 2
00:11:26.686 }
00:11:26.686 ],
00:11:26.686 "driver_specific": {
00:11:26.686 "raid": {
00:11:26.686 "uuid": "7380edb0-e998-4486-abcc-3a61dbf0608c",
00:11:26.686 "strip_size_kb": 64,
00:11:26.686 "state": "online",
00:11:26.686 "raid_level": "concat",
00:11:26.686 "superblock": true,
00:11:26.686 "num_base_bdevs": 4,
00:11:26.686 "num_base_bdevs_discovered": 4,
00:11:26.686 "num_base_bdevs_operational": 4,
00:11:26.686 "base_bdevs_list": [
00:11:26.686 {
00:11:26.686 "name": "pt1",
00:11:26.686 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:26.686 "is_configured": true,
00:11:26.686 "data_offset": 2048,
00:11:26.686 "data_size": 63488
00:11:26.686 },
00:11:26.686 {
00:11:26.686 "name": "pt2",
00:11:26.686 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:26.686 "is_configured": true,
00:11:26.686 "data_offset": 2048,
00:11:26.686 "data_size": 63488
00:11:26.686 },
00:11:26.686 {
00:11:26.686 "name": "pt3",
00:11:26.686 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:26.686 "is_configured": true,
00:11:26.686 "data_offset": 2048,
00:11:26.686 "data_size": 63488
00:11:26.686 },
00:11:26.686 {
00:11:26.686 "name": "pt4",
00:11:26.686 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:26.686 "is_configured": true,
00:11:26.686 "data_offset": 2048,
00:11:26.687 "data_size": 63488
00:11:26.687 }
00:11:26.687 ]
00:11:26.687 }
00:11:26.687 }
00:11:26.687 }'
00:11:26.687 12:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:26.687 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:26.687 pt2
00:11:26.687 pt3
00:11:26.687 pt4'
00:11:26.687 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.945 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:26.946 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.946 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:26.946 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:26.946 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:11:26.946 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:26.946 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.946 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.946 [2024-11-19 12:03:30.266875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:26.946 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.946 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7380edb0-e998-4486-abcc-3a61dbf0608c
00:11:26.946 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7380edb0-e998-4486-abcc-3a61dbf0608c ']'
00:11:26.946 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:26.946 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.946 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.946 [2024-11-19 12:03:30.314519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:26.946 [2024-11-19 12:03:30.314582] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:26.946 [2024-11-19 12:03:30.314678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:26.946 [2024-11-19 12:03:30.314782] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:26.946 [2024-11-19 12:03:30.314830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:11:26.946 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563
-- # xtrace_disable 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:27.205 12:03:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.205 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.205 [2024-11-19 12:03:30.470295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:27.205 [2024-11-19 12:03:30.472230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:27.205 [2024-11-19 12:03:30.472315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:27.205 [2024-11-19 12:03:30.472394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:27.205 [2024-11-19 12:03:30.472485] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:27.205 [2024-11-19 12:03:30.472586] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:27.205 [2024-11-19 12:03:30.472641] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:27.205 [2024-11-19 12:03:30.472664] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:27.205 [2024-11-19 12:03:30.472677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:27.205 [2024-11-19 12:03:30.472688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:27.205 request: 00:11:27.205 { 00:11:27.205 "name": "raid_bdev1", 00:11:27.205 "raid_level": "concat", 00:11:27.205 "base_bdevs": [ 00:11:27.205 "malloc1", 00:11:27.205 "malloc2", 00:11:27.205 "malloc3", 00:11:27.205 "malloc4" 00:11:27.205 ], 00:11:27.205 "strip_size_kb": 64, 00:11:27.205 "superblock": false, 00:11:27.205 "method": "bdev_raid_create", 00:11:27.205 "req_id": 1 00:11:27.205 } 00:11:27.205 Got JSON-RPC error response 00:11:27.205 response: 00:11:27.205 { 00:11:27.205 "code": -17, 00:11:27.205 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:27.206 } 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.206 [2024-11-19 12:03:30.530166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:27.206 [2024-11-19 12:03:30.530264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.206 [2024-11-19 12:03:30.530315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:27.206 [2024-11-19 12:03:30.530353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.206 [2024-11-19 12:03:30.532734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.206 [2024-11-19 12:03:30.532808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:27.206 [2024-11-19 12:03:30.532927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:27.206 [2024-11-19 12:03:30.533037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:27.206 pt1 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.206 "name": "raid_bdev1", 00:11:27.206 "uuid": "7380edb0-e998-4486-abcc-3a61dbf0608c", 00:11:27.206 "strip_size_kb": 64, 00:11:27.206 "state": "configuring", 00:11:27.206 "raid_level": "concat", 00:11:27.206 "superblock": true, 00:11:27.206 "num_base_bdevs": 4, 00:11:27.206 "num_base_bdevs_discovered": 1, 00:11:27.206 "num_base_bdevs_operational": 4, 00:11:27.206 "base_bdevs_list": [ 00:11:27.206 { 00:11:27.206 "name": "pt1", 00:11:27.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.206 "is_configured": true, 00:11:27.206 "data_offset": 2048, 00:11:27.206 "data_size": 63488 00:11:27.206 }, 00:11:27.206 { 00:11:27.206 "name": null, 00:11:27.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.206 "is_configured": false, 00:11:27.206 "data_offset": 2048, 00:11:27.206 "data_size": 63488 00:11:27.206 }, 00:11:27.206 { 00:11:27.206 "name": null, 00:11:27.206 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.206 "is_configured": false, 00:11:27.206 "data_offset": 2048, 00:11:27.206 "data_size": 63488 00:11:27.206 }, 00:11:27.206 { 00:11:27.206 "name": null, 00:11:27.206 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.206 "is_configured": false, 00:11:27.206 "data_offset": 2048, 00:11:27.206 "data_size": 63488 00:11:27.206 } 00:11:27.206 ] 00:11:27.206 }' 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.206 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.796 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:27.796 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:27.796 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.796 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.796 [2024-11-19 12:03:30.965434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:27.796 [2024-11-19 12:03:30.965543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.796 [2024-11-19 12:03:30.965579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:27.796 [2024-11-19 12:03:30.965608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.796 [2024-11-19 12:03:30.966099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.796 [2024-11-19 12:03:30.966159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:27.797 [2024-11-19 12:03:30.966281] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:27.797 [2024-11-19 12:03:30.966333] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:27.797 pt2 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.797 [2024-11-19 12:03:30.977413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.797 12:03:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.797 12:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.797 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.797 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.797 "name": "raid_bdev1", 00:11:27.797 "uuid": "7380edb0-e998-4486-abcc-3a61dbf0608c", 00:11:27.797 "strip_size_kb": 64, 00:11:27.797 "state": "configuring", 00:11:27.797 "raid_level": "concat", 00:11:27.797 "superblock": true, 00:11:27.797 "num_base_bdevs": 4, 00:11:27.797 "num_base_bdevs_discovered": 1, 00:11:27.797 "num_base_bdevs_operational": 4, 00:11:27.797 "base_bdevs_list": [ 00:11:27.797 { 00:11:27.797 "name": "pt1", 00:11:27.797 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.797 "is_configured": true, 00:11:27.797 "data_offset": 2048, 00:11:27.797 "data_size": 63488 00:11:27.797 }, 00:11:27.797 { 00:11:27.797 "name": null, 00:11:27.797 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.797 "is_configured": false, 00:11:27.797 "data_offset": 0, 00:11:27.797 "data_size": 63488 00:11:27.797 }, 00:11:27.797 { 00:11:27.797 "name": null, 00:11:27.797 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.797 "is_configured": false, 00:11:27.797 "data_offset": 2048, 00:11:27.797 "data_size": 63488 00:11:27.797 }, 00:11:27.797 { 00:11:27.797 "name": null, 00:11:27.797 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.797 "is_configured": false, 00:11:27.797 "data_offset": 2048, 00:11:27.797 "data_size": 63488 00:11:27.797 } 00:11:27.797 ] 00:11:27.797 }' 00:11:27.797 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.797 12:03:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.074 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:28.074 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:28.074 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:28.074 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.074 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.074 [2024-11-19 12:03:31.444638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:28.074 [2024-11-19 12:03:31.444750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.074 [2024-11-19 12:03:31.444800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:28.074 [2024-11-19 12:03:31.444835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.074 [2024-11-19 12:03:31.445386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.074 [2024-11-19 12:03:31.445449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:28.074 [2024-11-19 12:03:31.445576] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:28.074 [2024-11-19 12:03:31.445630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:28.074 pt2 00:11:28.074 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.074 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:28.074 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:28.333 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:28.333 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.333 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.333 [2024-11-19 12:03:31.452586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:28.333 [2024-11-19 12:03:31.452690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.333 [2024-11-19 12:03:31.452738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:28.333 [2024-11-19 12:03:31.452781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.333 [2024-11-19 12:03:31.453262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.334 [2024-11-19 12:03:31.453325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:28.334 [2024-11-19 12:03:31.453427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:28.334 [2024-11-19 12:03:31.453479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:28.334 pt3 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.334 [2024-11-19 12:03:31.460541] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:28.334 [2024-11-19 12:03:31.460619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.334 [2024-11-19 12:03:31.460675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:28.334 [2024-11-19 12:03:31.460709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.334 [2024-11-19 12:03:31.461126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.334 [2024-11-19 12:03:31.461180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:28.334 [2024-11-19 12:03:31.461252] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:28.334 [2024-11-19 12:03:31.461274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:28.334 [2024-11-19 12:03:31.461428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:28.334 [2024-11-19 12:03:31.461437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:28.334 [2024-11-19 12:03:31.461697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:28.334 [2024-11-19 12:03:31.461852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:28.334 [2024-11-19 12:03:31.461865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:28.334 [2024-11-19 12:03:31.462017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.334 pt4 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.334 "name": "raid_bdev1", 00:11:28.334 "uuid": "7380edb0-e998-4486-abcc-3a61dbf0608c", 00:11:28.334 "strip_size_kb": 64, 00:11:28.334 "state": "online", 00:11:28.334 "raid_level": "concat", 00:11:28.334 
"superblock": true, 00:11:28.334 "num_base_bdevs": 4, 00:11:28.334 "num_base_bdevs_discovered": 4, 00:11:28.334 "num_base_bdevs_operational": 4, 00:11:28.334 "base_bdevs_list": [ 00:11:28.334 { 00:11:28.334 "name": "pt1", 00:11:28.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:28.334 "is_configured": true, 00:11:28.334 "data_offset": 2048, 00:11:28.334 "data_size": 63488 00:11:28.334 }, 00:11:28.334 { 00:11:28.334 "name": "pt2", 00:11:28.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.334 "is_configured": true, 00:11:28.334 "data_offset": 2048, 00:11:28.334 "data_size": 63488 00:11:28.334 }, 00:11:28.334 { 00:11:28.334 "name": "pt3", 00:11:28.334 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.334 "is_configured": true, 00:11:28.334 "data_offset": 2048, 00:11:28.334 "data_size": 63488 00:11:28.334 }, 00:11:28.334 { 00:11:28.334 "name": "pt4", 00:11:28.334 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:28.334 "is_configured": true, 00:11:28.334 "data_offset": 2048, 00:11:28.334 "data_size": 63488 00:11:28.334 } 00:11:28.334 ] 00:11:28.334 }' 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.334 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.593 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:28.593 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:28.593 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:28.593 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:28.593 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:28.593 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:28.593 12:03:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:28.593 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:28.593 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.593 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.593 [2024-11-19 12:03:31.916276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.593 12:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.593 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:28.593 "name": "raid_bdev1", 00:11:28.593 "aliases": [ 00:11:28.593 "7380edb0-e998-4486-abcc-3a61dbf0608c" 00:11:28.593 ], 00:11:28.593 "product_name": "Raid Volume", 00:11:28.593 "block_size": 512, 00:11:28.593 "num_blocks": 253952, 00:11:28.594 "uuid": "7380edb0-e998-4486-abcc-3a61dbf0608c", 00:11:28.594 "assigned_rate_limits": { 00:11:28.594 "rw_ios_per_sec": 0, 00:11:28.594 "rw_mbytes_per_sec": 0, 00:11:28.594 "r_mbytes_per_sec": 0, 00:11:28.594 "w_mbytes_per_sec": 0 00:11:28.594 }, 00:11:28.594 "claimed": false, 00:11:28.594 "zoned": false, 00:11:28.594 "supported_io_types": { 00:11:28.594 "read": true, 00:11:28.594 "write": true, 00:11:28.594 "unmap": true, 00:11:28.594 "flush": true, 00:11:28.594 "reset": true, 00:11:28.594 "nvme_admin": false, 00:11:28.594 "nvme_io": false, 00:11:28.594 "nvme_io_md": false, 00:11:28.594 "write_zeroes": true, 00:11:28.594 "zcopy": false, 00:11:28.594 "get_zone_info": false, 00:11:28.594 "zone_management": false, 00:11:28.594 "zone_append": false, 00:11:28.594 "compare": false, 00:11:28.594 "compare_and_write": false, 00:11:28.594 "abort": false, 00:11:28.594 "seek_hole": false, 00:11:28.594 "seek_data": false, 00:11:28.594 "copy": false, 00:11:28.594 "nvme_iov_md": false 00:11:28.594 }, 00:11:28.594 
"memory_domains": [ 00:11:28.594 { 00:11:28.594 "dma_device_id": "system", 00:11:28.594 "dma_device_type": 1 00:11:28.594 }, 00:11:28.594 { 00:11:28.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.594 "dma_device_type": 2 00:11:28.594 }, 00:11:28.594 { 00:11:28.594 "dma_device_id": "system", 00:11:28.594 "dma_device_type": 1 00:11:28.594 }, 00:11:28.594 { 00:11:28.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.594 "dma_device_type": 2 00:11:28.594 }, 00:11:28.594 { 00:11:28.594 "dma_device_id": "system", 00:11:28.594 "dma_device_type": 1 00:11:28.594 }, 00:11:28.594 { 00:11:28.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.594 "dma_device_type": 2 00:11:28.594 }, 00:11:28.594 { 00:11:28.594 "dma_device_id": "system", 00:11:28.594 "dma_device_type": 1 00:11:28.594 }, 00:11:28.594 { 00:11:28.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.594 "dma_device_type": 2 00:11:28.594 } 00:11:28.594 ], 00:11:28.594 "driver_specific": { 00:11:28.594 "raid": { 00:11:28.594 "uuid": "7380edb0-e998-4486-abcc-3a61dbf0608c", 00:11:28.594 "strip_size_kb": 64, 00:11:28.594 "state": "online", 00:11:28.594 "raid_level": "concat", 00:11:28.594 "superblock": true, 00:11:28.594 "num_base_bdevs": 4, 00:11:28.594 "num_base_bdevs_discovered": 4, 00:11:28.594 "num_base_bdevs_operational": 4, 00:11:28.594 "base_bdevs_list": [ 00:11:28.594 { 00:11:28.594 "name": "pt1", 00:11:28.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:28.594 "is_configured": true, 00:11:28.594 "data_offset": 2048, 00:11:28.594 "data_size": 63488 00:11:28.594 }, 00:11:28.594 { 00:11:28.594 "name": "pt2", 00:11:28.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.594 "is_configured": true, 00:11:28.594 "data_offset": 2048, 00:11:28.594 "data_size": 63488 00:11:28.594 }, 00:11:28.594 { 00:11:28.594 "name": "pt3", 00:11:28.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.594 "is_configured": true, 00:11:28.594 "data_offset": 2048, 00:11:28.594 "data_size": 63488 
00:11:28.594 }, 00:11:28.594 { 00:11:28.594 "name": "pt4", 00:11:28.594 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:28.594 "is_configured": true, 00:11:28.594 "data_offset": 2048, 00:11:28.594 "data_size": 63488 00:11:28.594 } 00:11:28.594 ] 00:11:28.594 } 00:11:28.594 } 00:11:28.594 }' 00:11:28.594 12:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:28.853 pt2 00:11:28.853 pt3 00:11:28.853 pt4' 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.853 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.112 [2024-11-19 12:03:32.239633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7380edb0-e998-4486-abcc-3a61dbf0608c '!=' 7380edb0-e998-4486-abcc-3a61dbf0608c ']' 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72634 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72634 ']' 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72634 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.112 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72634 00:11:29.112 killing process with pid 72634 00:11:29.113 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.113 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.113 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72634' 00:11:29.113 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72634 00:11:29.113 [2024-11-19 12:03:32.318907] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.113 [2024-11-19 12:03:32.318983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.113 [2024-11-19 12:03:32.319087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.113 [2024-11-19 12:03:32.319097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:29.113 12:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72634 00:11:29.371 [2024-11-19 12:03:32.721625] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:30.748 12:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:30.748 00:11:30.748 real 0m5.551s 00:11:30.748 user 0m7.939s 00:11:30.748 sys 0m0.946s 00:11:30.748 ************************************ 00:11:30.748 END TEST raid_superblock_test 00:11:30.748 ************************************ 00:11:30.748 12:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.748 12:03:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.748 12:03:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:30.748 12:03:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:30.748 12:03:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.748 12:03:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:30.748 ************************************ 00:11:30.748 START TEST raid_read_error_test 00:11:30.748 ************************************ 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4GUtgd6dPV 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72899 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72899 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72899 ']' 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.748 12:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.748 [2024-11-19 12:03:34.045271] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:11:30.748 [2024-11-19 12:03:34.045400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72899 ] 00:11:31.009 [2024-11-19 12:03:34.225736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.009 [2024-11-19 12:03:34.346844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.268 [2024-11-19 12:03:34.541408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.268 [2024-11-19 12:03:34.541466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.527 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.527 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:31.528 12:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.528 12:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:31.528 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.528 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.528 BaseBdev1_malloc 00:11:31.528 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.528 12:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:31.528 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.528 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.787 true 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.787 [2024-11-19 12:03:34.915100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:31.787 [2024-11-19 12:03:34.915214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.787 [2024-11-19 12:03:34.915257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:31.787 [2024-11-19 12:03:34.915293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.787 [2024-11-19 12:03:34.917438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.787 [2024-11-19 12:03:34.917511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:31.787 BaseBdev1 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.787 BaseBdev2_malloc 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.787 true 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.787 [2024-11-19 12:03:34.982771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:31.787 [2024-11-19 12:03:34.982827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.787 [2024-11-19 12:03:34.982843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:31.787 [2024-11-19 12:03:34.982853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.787 [2024-11-19 12:03:34.984925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.787 [2024-11-19 12:03:34.984962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:31.787 BaseBdev2 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.787 12:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.787 BaseBdev3_malloc 00:11:31.787 12:03:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.787 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:31.787 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.788 true 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.788 [2024-11-19 12:03:35.069245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:31.788 [2024-11-19 12:03:35.069364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.788 [2024-11-19 12:03:35.069383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:31.788 [2024-11-19 12:03:35.069394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.788 [2024-11-19 12:03:35.071374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.788 [2024-11-19 12:03:35.071413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:31.788 BaseBdev3 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.788 BaseBdev4_malloc 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.788 true 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.788 [2024-11-19 12:03:35.137751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:31.788 [2024-11-19 12:03:35.137860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.788 [2024-11-19 12:03:35.137909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:31.788 [2024-11-19 12:03:35.137947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.788 [2024-11-19 12:03:35.140050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.788 [2024-11-19 12:03:35.140120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:31.788 BaseBdev4 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.788 [2024-11-19 12:03:35.149786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.788 [2024-11-19 12:03:35.151544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.788 [2024-11-19 12:03:35.151675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.788 [2024-11-19 12:03:35.151777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:31.788 [2024-11-19 12:03:35.152045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:31.788 [2024-11-19 12:03:35.152095] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:31.788 [2024-11-19 12:03:35.152357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:31.788 [2024-11-19 12:03:35.152551] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:31.788 [2024-11-19 12:03:35.152592] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:31.788 [2024-11-19 12:03:35.152770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:31.788 12:03:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.788 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.048 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.048 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.048 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.048 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.048 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.048 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.048 "name": "raid_bdev1", 00:11:32.048 "uuid": "1a6ff407-d7e1-420e-861a-8f4a4418eac4", 00:11:32.048 "strip_size_kb": 64, 00:11:32.048 "state": "online", 00:11:32.048 "raid_level": "concat", 00:11:32.048 "superblock": true, 00:11:32.048 "num_base_bdevs": 4, 00:11:32.048 "num_base_bdevs_discovered": 4, 00:11:32.048 "num_base_bdevs_operational": 4, 00:11:32.048 "base_bdevs_list": [ 
00:11:32.048 { 00:11:32.048 "name": "BaseBdev1", 00:11:32.048 "uuid": "6fabe83a-cb42-5016-b6e0-739d4501481a", 00:11:32.048 "is_configured": true, 00:11:32.048 "data_offset": 2048, 00:11:32.048 "data_size": 63488 00:11:32.048 }, 00:11:32.048 { 00:11:32.048 "name": "BaseBdev2", 00:11:32.048 "uuid": "c8b10f66-2721-5043-b875-febbdae2d038", 00:11:32.048 "is_configured": true, 00:11:32.048 "data_offset": 2048, 00:11:32.048 "data_size": 63488 00:11:32.048 }, 00:11:32.048 { 00:11:32.048 "name": "BaseBdev3", 00:11:32.048 "uuid": "fbf5a0d5-0f9c-5c62-a65c-939c586307ee", 00:11:32.048 "is_configured": true, 00:11:32.048 "data_offset": 2048, 00:11:32.048 "data_size": 63488 00:11:32.048 }, 00:11:32.048 { 00:11:32.048 "name": "BaseBdev4", 00:11:32.048 "uuid": "9d088943-7def-50ac-90f5-e931a8b92fd3", 00:11:32.048 "is_configured": true, 00:11:32.048 "data_offset": 2048, 00:11:32.048 "data_size": 63488 00:11:32.048 } 00:11:32.048 ] 00:11:32.048 }' 00:11:32.048 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.048 12:03:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.309 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:32.309 12:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:32.570 [2024-11-19 12:03:35.734162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.510 12:03:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.510 12:03:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.510 "name": "raid_bdev1", 00:11:33.510 "uuid": "1a6ff407-d7e1-420e-861a-8f4a4418eac4", 00:11:33.510 "strip_size_kb": 64, 00:11:33.510 "state": "online", 00:11:33.510 "raid_level": "concat", 00:11:33.510 "superblock": true, 00:11:33.510 "num_base_bdevs": 4, 00:11:33.510 "num_base_bdevs_discovered": 4, 00:11:33.510 "num_base_bdevs_operational": 4, 00:11:33.510 "base_bdevs_list": [ 00:11:33.510 { 00:11:33.510 "name": "BaseBdev1", 00:11:33.510 "uuid": "6fabe83a-cb42-5016-b6e0-739d4501481a", 00:11:33.510 "is_configured": true, 00:11:33.510 "data_offset": 2048, 00:11:33.510 "data_size": 63488 00:11:33.510 }, 00:11:33.510 { 00:11:33.510 "name": "BaseBdev2", 00:11:33.510 "uuid": "c8b10f66-2721-5043-b875-febbdae2d038", 00:11:33.510 "is_configured": true, 00:11:33.510 "data_offset": 2048, 00:11:33.510 "data_size": 63488 00:11:33.510 }, 00:11:33.510 { 00:11:33.510 "name": "BaseBdev3", 00:11:33.510 "uuid": "fbf5a0d5-0f9c-5c62-a65c-939c586307ee", 00:11:33.510 "is_configured": true, 00:11:33.510 "data_offset": 2048, 00:11:33.510 "data_size": 63488 00:11:33.510 }, 00:11:33.510 { 00:11:33.510 "name": "BaseBdev4", 00:11:33.510 "uuid": "9d088943-7def-50ac-90f5-e931a8b92fd3", 00:11:33.510 "is_configured": true, 00:11:33.510 "data_offset": 2048, 00:11:33.510 "data_size": 63488 00:11:33.510 } 00:11:33.510 ] 00:11:33.510 }' 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.510 12:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.770 12:03:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:33.770 12:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.770 12:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.770 [2024-11-19 12:03:37.121982] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.770 [2024-11-19 12:03:37.122082] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.770 [2024-11-19 12:03:37.124823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.770 [2024-11-19 12:03:37.124921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.770 [2024-11-19 12:03:37.124983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.770 [2024-11-19 12:03:37.125044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:33.770 { 00:11:33.770 "results": [ 00:11:33.770 { 00:11:33.770 "job": "raid_bdev1", 00:11:33.770 "core_mask": "0x1", 00:11:33.770 "workload": "randrw", 00:11:33.770 "percentage": 50, 00:11:33.770 "status": "finished", 00:11:33.770 "queue_depth": 1, 00:11:33.770 "io_size": 131072, 00:11:33.770 "runtime": 1.388845, 00:11:33.770 "iops": 15906.022630315118, 00:11:33.770 "mibps": 1988.2528287893897, 00:11:33.770 "io_failed": 1, 00:11:33.770 "io_timeout": 0, 00:11:33.770 "avg_latency_us": 87.45774296767706, 00:11:33.770 "min_latency_us": 25.7117903930131, 00:11:33.770 "max_latency_us": 1366.5257641921398 00:11:33.770 } 00:11:33.770 ], 00:11:33.770 "core_count": 1 00:11:33.770 } 00:11:33.770 12:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.770 12:03:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72899 00:11:33.770 12:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72899 ']' 00:11:33.770 12:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72899 00:11:33.770 12:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:33.770 12:03:37 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.770 12:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72899 00:11:34.030 12:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.030 killing process with pid 72899 00:11:34.030 12:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.030 12:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72899' 00:11:34.030 12:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72899 00:11:34.030 [2024-11-19 12:03:37.171990] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:34.030 12:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72899 00:11:34.290 [2024-11-19 12:03:37.499763] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.672 12:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4GUtgd6dPV 00:11:35.672 12:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:35.672 12:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:35.672 12:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:35.672 ************************************ 00:11:35.672 END TEST raid_read_error_test 00:11:35.672 ************************************ 00:11:35.672 12:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:35.672 12:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:35.672 12:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:35.672 12:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:35.672 00:11:35.672 real 0m4.720s 
00:11:35.672 user 0m5.578s 00:11:35.672 sys 0m0.602s 00:11:35.672 12:03:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.672 12:03:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 12:03:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:35.672 12:03:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:35.672 12:03:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.672 12:03:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 ************************************ 00:11:35.672 START TEST raid_write_error_test 00:11:35.672 ************************************ 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iU0cmpbBLM 00:11:35.672 12:03:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73044 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73044 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73044 ']' 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.672 12:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 [2024-11-19 12:03:38.830634] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:11:35.672 [2024-11-19 12:03:38.830870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73044 ] 00:11:35.672 [2024-11-19 12:03:39.011105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.940 [2024-11-19 12:03:39.135375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.198 [2024-11-19 12:03:39.342467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.198 [2024-11-19 12:03:39.342534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.460 BaseBdev1_malloc 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.460 true 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.460 [2024-11-19 12:03:39.734594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:36.460 [2024-11-19 12:03:39.734707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.460 [2024-11-19 12:03:39.734746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:36.460 [2024-11-19 12:03:39.734787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.460 [2024-11-19 12:03:39.736941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.460 [2024-11-19 12:03:39.737026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:36.460 BaseBdev1 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.460 BaseBdev2_malloc 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:36.460 12:03:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.460 true 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.460 [2024-11-19 12:03:39.801771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:36.460 [2024-11-19 12:03:39.801869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.460 [2024-11-19 12:03:39.801903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:36.460 [2024-11-19 12:03:39.801934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.460 [2024-11-19 12:03:39.803981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.460 [2024-11-19 12:03:39.804068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:36.460 BaseBdev2 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.460 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:36.722 BaseBdev3_malloc 00:11:36.722 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.722 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:36.722 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.722 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.722 true 00:11:36.722 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.722 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:36.722 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.722 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.722 [2024-11-19 12:03:39.889923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:36.722 [2024-11-19 12:03:39.890043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.722 [2024-11-19 12:03:39.890083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:36.722 [2024-11-19 12:03:39.890119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.722 [2024-11-19 12:03:39.892381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.722 [2024-11-19 12:03:39.892472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:36.722 BaseBdev3 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.723 BaseBdev4_malloc 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.723 true 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.723 [2024-11-19 12:03:39.956531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:36.723 [2024-11-19 12:03:39.956630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.723 [2024-11-19 12:03:39.956666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:36.723 [2024-11-19 12:03:39.956695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.723 [2024-11-19 12:03:39.958683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.723 [2024-11-19 12:03:39.958758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:36.723 BaseBdev4 
00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.723 [2024-11-19 12:03:39.968576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.723 [2024-11-19 12:03:39.970390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.723 [2024-11-19 12:03:39.970508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.723 [2024-11-19 12:03:39.970609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:36.723 [2024-11-19 12:03:39.970873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:36.723 [2024-11-19 12:03:39.970924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:36.723 [2024-11-19 12:03:39.971238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:36.723 [2024-11-19 12:03:39.971439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:36.723 [2024-11-19 12:03:39.971451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:36.723 [2024-11-19 12:03:39.971636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.723 12:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.723 12:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.723 "name": "raid_bdev1", 00:11:36.723 "uuid": "c951132c-533f-4ec0-9d22-280f93aa7069", 00:11:36.723 "strip_size_kb": 64, 00:11:36.723 "state": "online", 00:11:36.723 "raid_level": "concat", 00:11:36.723 "superblock": true, 00:11:36.723 "num_base_bdevs": 4, 00:11:36.723 "num_base_bdevs_discovered": 4, 00:11:36.723 
"num_base_bdevs_operational": 4, 00:11:36.723 "base_bdevs_list": [ 00:11:36.723 { 00:11:36.723 "name": "BaseBdev1", 00:11:36.723 "uuid": "662983ab-a1f5-50dd-9822-806516d6c193", 00:11:36.723 "is_configured": true, 00:11:36.723 "data_offset": 2048, 00:11:36.723 "data_size": 63488 00:11:36.723 }, 00:11:36.723 { 00:11:36.723 "name": "BaseBdev2", 00:11:36.723 "uuid": "92ecda57-d841-5b1a-84bc-833d9b1e9f30", 00:11:36.723 "is_configured": true, 00:11:36.723 "data_offset": 2048, 00:11:36.723 "data_size": 63488 00:11:36.723 }, 00:11:36.723 { 00:11:36.723 "name": "BaseBdev3", 00:11:36.723 "uuid": "a9636dba-bef8-5863-9d15-24451c21ec96", 00:11:36.723 "is_configured": true, 00:11:36.723 "data_offset": 2048, 00:11:36.723 "data_size": 63488 00:11:36.723 }, 00:11:36.723 { 00:11:36.723 "name": "BaseBdev4", 00:11:36.723 "uuid": "b0cdf88e-9144-5ae0-b69a-56f99f04b85c", 00:11:36.723 "is_configured": true, 00:11:36.723 "data_offset": 2048, 00:11:36.723 "data_size": 63488 00:11:36.723 } 00:11:36.723 ] 00:11:36.723 }' 00:11:36.723 12:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.723 12:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.292 12:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:37.292 12:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:37.292 [2024-11-19 12:03:40.521003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.231 12:03:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.231 "name": "raid_bdev1", 00:11:38.231 "uuid": "c951132c-533f-4ec0-9d22-280f93aa7069", 00:11:38.231 "strip_size_kb": 64, 00:11:38.231 "state": "online", 00:11:38.231 "raid_level": "concat", 00:11:38.231 "superblock": true, 00:11:38.231 "num_base_bdevs": 4, 00:11:38.231 "num_base_bdevs_discovered": 4, 00:11:38.231 "num_base_bdevs_operational": 4, 00:11:38.231 "base_bdevs_list": [ 00:11:38.231 { 00:11:38.231 "name": "BaseBdev1", 00:11:38.231 "uuid": "662983ab-a1f5-50dd-9822-806516d6c193", 00:11:38.231 "is_configured": true, 00:11:38.231 "data_offset": 2048, 00:11:38.231 "data_size": 63488 00:11:38.231 }, 00:11:38.231 { 00:11:38.231 "name": "BaseBdev2", 00:11:38.231 "uuid": "92ecda57-d841-5b1a-84bc-833d9b1e9f30", 00:11:38.231 "is_configured": true, 00:11:38.231 "data_offset": 2048, 00:11:38.231 "data_size": 63488 00:11:38.231 }, 00:11:38.231 { 00:11:38.231 "name": "BaseBdev3", 00:11:38.231 "uuid": "a9636dba-bef8-5863-9d15-24451c21ec96", 00:11:38.231 "is_configured": true, 00:11:38.231 "data_offset": 2048, 00:11:38.231 "data_size": 63488 00:11:38.231 }, 00:11:38.231 { 00:11:38.231 "name": "BaseBdev4", 00:11:38.231 "uuid": "b0cdf88e-9144-5ae0-b69a-56f99f04b85c", 00:11:38.231 "is_configured": true, 00:11:38.231 "data_offset": 2048, 00:11:38.231 "data_size": 63488 00:11:38.231 } 00:11:38.231 ] 00:11:38.231 }' 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.231 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.522 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.522 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.522 12:03:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.522 [2024-11-19 12:03:41.885069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.522 [2024-11-19 12:03:41.885152] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.522 [2024-11-19 12:03:41.887852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.522 [2024-11-19 12:03:41.887951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.522 [2024-11-19 12:03:41.888024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.522 [2024-11-19 12:03:41.888074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:38.522 { 00:11:38.522 "results": [ 00:11:38.522 { 00:11:38.522 "job": "raid_bdev1", 00:11:38.522 "core_mask": "0x1", 00:11:38.522 "workload": "randrw", 00:11:38.522 "percentage": 50, 00:11:38.522 "status": "finished", 00:11:38.522 "queue_depth": 1, 00:11:38.522 "io_size": 131072, 00:11:38.522 "runtime": 1.364826, 00:11:38.522 "iops": 15675.258238046461, 00:11:38.522 "mibps": 1959.4072797558076, 00:11:38.522 "io_failed": 1, 00:11:38.522 "io_timeout": 0, 00:11:38.522 "avg_latency_us": 88.7254276240929, 00:11:38.522 "min_latency_us": 25.9353711790393, 00:11:38.522 "max_latency_us": 1430.9170305676855 00:11:38.522 } 00:11:38.522 ], 00:11:38.522 "core_count": 1 00:11:38.522 } 00:11:38.522 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.522 12:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73044 00:11:38.522 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73044 ']' 00:11:38.522 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73044 00:11:38.522 12:03:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:38.783 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.783 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73044 00:11:38.783 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.783 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.783 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73044' 00:11:38.783 killing process with pid 73044 00:11:38.783 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73044 00:11:38.783 [2024-11-19 12:03:41.932747] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.783 12:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73044 00:11:39.042 [2024-11-19 12:03:42.262639] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:40.423 12:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iU0cmpbBLM 00:11:40.423 12:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:40.423 12:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:40.423 12:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:40.423 12:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:40.423 12:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:40.423 12:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:40.423 ************************************ 00:11:40.423 END TEST raid_write_error_test 00:11:40.423 ************************************ 00:11:40.423 12:03:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:40.423 00:11:40.423 real 0m4.732s 00:11:40.423 user 0m5.557s 00:11:40.423 sys 0m0.616s 00:11:40.423 12:03:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.423 12:03:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.423 12:03:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:40.423 12:03:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:40.423 12:03:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:40.423 12:03:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.423 12:03:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.423 ************************************ 00:11:40.423 START TEST raid_state_function_test 00:11:40.423 ************************************ 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.423 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.423 Process raid pid: 73189 00:11:40.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73189 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73189' 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73189 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73189 ']' 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.424 12:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:40.424 [2024-11-19 12:03:43.607495] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:11:40.424 [2024-11-19 12:03:43.607694] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.424 [2024-11-19 12:03:43.781499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.683 [2024-11-19 12:03:43.907655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.943 [2024-11-19 12:03:44.118300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.943 [2024-11-19 12:03:44.118426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.202 [2024-11-19 12:03:44.440055] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:11:41.202 [2024-11-19 12:03:44.440172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:41.202 [2024-11-19 12:03:44.440207] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.202 [2024-11-19 12:03:44.440234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.202 [2024-11-19 12:03:44.440264] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:41.202 [2024-11-19 12:03:44.440285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:41.202 [2024-11-19 12:03:44.440303] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:41.202 [2024-11-19 12:03:44.440342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.202 12:03:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.202 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.202 "name": "Existed_Raid", 00:11:41.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.202 "strip_size_kb": 0, 00:11:41.203 "state": "configuring", 00:11:41.203 "raid_level": "raid1", 00:11:41.203 "superblock": false, 00:11:41.203 "num_base_bdevs": 4, 00:11:41.203 "num_base_bdevs_discovered": 0, 00:11:41.203 "num_base_bdevs_operational": 4, 00:11:41.203 "base_bdevs_list": [ 00:11:41.203 { 00:11:41.203 "name": "BaseBdev1", 00:11:41.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.203 "is_configured": false, 00:11:41.203 "data_offset": 0, 00:11:41.203 "data_size": 0 00:11:41.203 }, 00:11:41.203 { 00:11:41.203 "name": "BaseBdev2", 00:11:41.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.203 "is_configured": false, 00:11:41.203 "data_offset": 0, 00:11:41.203 "data_size": 0 00:11:41.203 }, 00:11:41.203 { 00:11:41.203 "name": "BaseBdev3", 00:11:41.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.203 "is_configured": false, 00:11:41.203 "data_offset": 0, 00:11:41.203 "data_size": 0 00:11:41.203 }, 00:11:41.203 { 00:11:41.203 "name": "BaseBdev4", 00:11:41.203 
"uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.203 "is_configured": false, 00:11:41.203 "data_offset": 0, 00:11:41.203 "data_size": 0 00:11:41.203 } 00:11:41.203 ] 00:11:41.203 }' 00:11:41.203 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.203 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.778 [2024-11-19 12:03:44.871237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.778 [2024-11-19 12:03:44.871285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.778 [2024-11-19 12:03:44.883202] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:41.778 [2024-11-19 12:03:44.883290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:41.778 [2024-11-19 12:03:44.883318] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.778 [2024-11-19 12:03:44.883341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev2 doesn't exist now 00:11:41.778 [2024-11-19 12:03:44.883359] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:41.778 [2024-11-19 12:03:44.883380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:41.778 [2024-11-19 12:03:44.883397] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:41.778 [2024-11-19 12:03:44.883433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.778 [2024-11-19 12:03:44.930476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.778 BaseBdev1 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.778 [ 00:11:41.778 { 00:11:41.778 "name": "BaseBdev1", 00:11:41.778 "aliases": [ 00:11:41.778 "4cffa242-2117-4d98-8201-9f92623ddba4" 00:11:41.778 ], 00:11:41.778 "product_name": "Malloc disk", 00:11:41.778 "block_size": 512, 00:11:41.778 "num_blocks": 65536, 00:11:41.778 "uuid": "4cffa242-2117-4d98-8201-9f92623ddba4", 00:11:41.778 "assigned_rate_limits": { 00:11:41.778 "rw_ios_per_sec": 0, 00:11:41.778 "rw_mbytes_per_sec": 0, 00:11:41.778 "r_mbytes_per_sec": 0, 00:11:41.778 "w_mbytes_per_sec": 0 00:11:41.778 }, 00:11:41.778 "claimed": true, 00:11:41.778 "claim_type": "exclusive_write", 00:11:41.778 "zoned": false, 00:11:41.778 "supported_io_types": { 00:11:41.778 "read": true, 00:11:41.778 "write": true, 00:11:41.778 "unmap": true, 00:11:41.778 "flush": true, 00:11:41.778 "reset": true, 00:11:41.778 "nvme_admin": false, 00:11:41.778 "nvme_io": false, 00:11:41.778 "nvme_io_md": false, 00:11:41.778 "write_zeroes": true, 00:11:41.778 "zcopy": true, 00:11:41.778 "get_zone_info": false, 00:11:41.778 "zone_management": false, 00:11:41.778 "zone_append": false, 00:11:41.778 "compare": false, 00:11:41.778 "compare_and_write": false, 00:11:41.778 "abort": true, 00:11:41.778 "seek_hole": false, 00:11:41.778 "seek_data": false, 00:11:41.778 
"copy": true, 00:11:41.778 "nvme_iov_md": false 00:11:41.778 }, 00:11:41.778 "memory_domains": [ 00:11:41.778 { 00:11:41.778 "dma_device_id": "system", 00:11:41.778 "dma_device_type": 1 00:11:41.778 }, 00:11:41.778 { 00:11:41.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.778 "dma_device_type": 2 00:11:41.778 } 00:11:41.778 ], 00:11:41.778 "driver_specific": {} 00:11:41.778 } 00:11:41.778 ] 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:41.778 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.779 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.779 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.779 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.779 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.779 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.779 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.779 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.779 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.779 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.779 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.779 12:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:11:41.779 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.779 12:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.779 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.779 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.779 "name": "Existed_Raid", 00:11:41.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.779 "strip_size_kb": 0, 00:11:41.779 "state": "configuring", 00:11:41.779 "raid_level": "raid1", 00:11:41.779 "superblock": false, 00:11:41.779 "num_base_bdevs": 4, 00:11:41.779 "num_base_bdevs_discovered": 1, 00:11:41.779 "num_base_bdevs_operational": 4, 00:11:41.779 "base_bdevs_list": [ 00:11:41.779 { 00:11:41.779 "name": "BaseBdev1", 00:11:41.779 "uuid": "4cffa242-2117-4d98-8201-9f92623ddba4", 00:11:41.779 "is_configured": true, 00:11:41.779 "data_offset": 0, 00:11:41.779 "data_size": 65536 00:11:41.779 }, 00:11:41.779 { 00:11:41.779 "name": "BaseBdev2", 00:11:41.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.779 "is_configured": false, 00:11:41.779 "data_offset": 0, 00:11:41.779 "data_size": 0 00:11:41.779 }, 00:11:41.779 { 00:11:41.779 "name": "BaseBdev3", 00:11:41.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.779 "is_configured": false, 00:11:41.779 "data_offset": 0, 00:11:41.779 "data_size": 0 00:11:41.779 }, 00:11:41.779 { 00:11:41.779 "name": "BaseBdev4", 00:11:41.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.779 "is_configured": false, 00:11:41.779 "data_offset": 0, 00:11:41.779 "data_size": 0 00:11:41.779 } 00:11:41.779 ] 00:11:41.779 }' 00:11:41.779 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.779 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.047 12:03:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.047 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.047 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.047 [2024-11-19 12:03:45.405754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.047 [2024-11-19 12:03:45.405818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:42.047 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.047 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:42.047 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.047 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.047 [2024-11-19 12:03:45.417783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.047 [2024-11-19 12:03:45.420038] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.047 [2024-11-19 12:03:45.420120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.047 [2024-11-19 12:03:45.420151] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:42.047 [2024-11-19 12:03:45.420177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.047 [2024-11-19 12:03:45.420197] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:42.047 [2024-11-19 12:03:45.420219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't 
exist now 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.307 12:03:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.307 "name": "Existed_Raid", 00:11:42.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.307 "strip_size_kb": 0, 00:11:42.307 "state": "configuring", 00:11:42.307 "raid_level": "raid1", 00:11:42.307 "superblock": false, 00:11:42.307 "num_base_bdevs": 4, 00:11:42.307 "num_base_bdevs_discovered": 1, 00:11:42.307 "num_base_bdevs_operational": 4, 00:11:42.307 "base_bdevs_list": [ 00:11:42.307 { 00:11:42.307 "name": "BaseBdev1", 00:11:42.307 "uuid": "4cffa242-2117-4d98-8201-9f92623ddba4", 00:11:42.307 "is_configured": true, 00:11:42.307 "data_offset": 0, 00:11:42.307 "data_size": 65536 00:11:42.307 }, 00:11:42.307 { 00:11:42.307 "name": "BaseBdev2", 00:11:42.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.307 "is_configured": false, 00:11:42.307 "data_offset": 0, 00:11:42.307 "data_size": 0 00:11:42.307 }, 00:11:42.307 { 00:11:42.307 "name": "BaseBdev3", 00:11:42.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.307 "is_configured": false, 00:11:42.307 "data_offset": 0, 00:11:42.307 "data_size": 0 00:11:42.307 }, 00:11:42.307 { 00:11:42.307 "name": "BaseBdev4", 00:11:42.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.307 "is_configured": false, 00:11:42.307 "data_offset": 0, 00:11:42.307 "data_size": 0 00:11:42.307 } 00:11:42.307 ] 00:11:42.307 }' 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.307 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.568 12:03:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.568 [2024-11-19 12:03:45.893451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.568 BaseBdev2 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.568 [ 00:11:42.568 { 00:11:42.568 "name": "BaseBdev2", 00:11:42.568 "aliases": [ 00:11:42.568 "4d39343f-8061-46e3-95ff-384311cc6b2a" 00:11:42.568 ], 00:11:42.568 "product_name": "Malloc disk", 
00:11:42.568 "block_size": 512, 00:11:42.568 "num_blocks": 65536, 00:11:42.568 "uuid": "4d39343f-8061-46e3-95ff-384311cc6b2a", 00:11:42.568 "assigned_rate_limits": { 00:11:42.568 "rw_ios_per_sec": 0, 00:11:42.568 "rw_mbytes_per_sec": 0, 00:11:42.568 "r_mbytes_per_sec": 0, 00:11:42.568 "w_mbytes_per_sec": 0 00:11:42.568 }, 00:11:42.568 "claimed": true, 00:11:42.568 "claim_type": "exclusive_write", 00:11:42.568 "zoned": false, 00:11:42.568 "supported_io_types": { 00:11:42.568 "read": true, 00:11:42.568 "write": true, 00:11:42.568 "unmap": true, 00:11:42.568 "flush": true, 00:11:42.568 "reset": true, 00:11:42.568 "nvme_admin": false, 00:11:42.568 "nvme_io": false, 00:11:42.568 "nvme_io_md": false, 00:11:42.568 "write_zeroes": true, 00:11:42.568 "zcopy": true, 00:11:42.568 "get_zone_info": false, 00:11:42.568 "zone_management": false, 00:11:42.568 "zone_append": false, 00:11:42.568 "compare": false, 00:11:42.568 "compare_and_write": false, 00:11:42.568 "abort": true, 00:11:42.568 "seek_hole": false, 00:11:42.568 "seek_data": false, 00:11:42.568 "copy": true, 00:11:42.568 "nvme_iov_md": false 00:11:42.568 }, 00:11:42.568 "memory_domains": [ 00:11:42.568 { 00:11:42.568 "dma_device_id": "system", 00:11:42.568 "dma_device_type": 1 00:11:42.568 }, 00:11:42.568 { 00:11:42.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.568 "dma_device_type": 2 00:11:42.568 } 00:11:42.568 ], 00:11:42.568 "driver_specific": {} 00:11:42.568 } 00:11:42.568 ] 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 4 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.568 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.569 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.569 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.569 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.569 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.569 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.828 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.828 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.828 "name": "Existed_Raid", 00:11:42.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.828 "strip_size_kb": 0, 00:11:42.828 "state": "configuring", 00:11:42.828 "raid_level": "raid1", 00:11:42.828 "superblock": false, 00:11:42.828 "num_base_bdevs": 4, 
00:11:42.828 "num_base_bdevs_discovered": 2, 00:11:42.828 "num_base_bdevs_operational": 4, 00:11:42.828 "base_bdevs_list": [ 00:11:42.828 { 00:11:42.828 "name": "BaseBdev1", 00:11:42.828 "uuid": "4cffa242-2117-4d98-8201-9f92623ddba4", 00:11:42.828 "is_configured": true, 00:11:42.828 "data_offset": 0, 00:11:42.828 "data_size": 65536 00:11:42.828 }, 00:11:42.828 { 00:11:42.828 "name": "BaseBdev2", 00:11:42.828 "uuid": "4d39343f-8061-46e3-95ff-384311cc6b2a", 00:11:42.828 "is_configured": true, 00:11:42.828 "data_offset": 0, 00:11:42.828 "data_size": 65536 00:11:42.828 }, 00:11:42.828 { 00:11:42.828 "name": "BaseBdev3", 00:11:42.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.828 "is_configured": false, 00:11:42.828 "data_offset": 0, 00:11:42.828 "data_size": 0 00:11:42.828 }, 00:11:42.828 { 00:11:42.828 "name": "BaseBdev4", 00:11:42.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.829 "is_configured": false, 00:11:42.829 "data_offset": 0, 00:11:42.829 "data_size": 0 00:11:42.829 } 00:11:42.829 ] 00:11:42.829 }' 00:11:42.829 12:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.829 12:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.088 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:43.088 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.088 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.088 BaseBdev3 00:11:43.088 [2024-11-19 12:03:46.456311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:43.088 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.088 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:43.088 12:03:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:43.088 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.088 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:43.088 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.088 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.088 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.088 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.088 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.349 [ 00:11:43.349 { 00:11:43.349 "name": "BaseBdev3", 00:11:43.349 "aliases": [ 00:11:43.349 "c19d5b77-4629-4a02-adbd-46d639cd0b01" 00:11:43.349 ], 00:11:43.349 "product_name": "Malloc disk", 00:11:43.349 "block_size": 512, 00:11:43.349 "num_blocks": 65536, 00:11:43.349 "uuid": "c19d5b77-4629-4a02-adbd-46d639cd0b01", 00:11:43.349 "assigned_rate_limits": { 00:11:43.349 "rw_ios_per_sec": 0, 00:11:43.349 "rw_mbytes_per_sec": 0, 00:11:43.349 "r_mbytes_per_sec": 0, 00:11:43.349 "w_mbytes_per_sec": 0 00:11:43.349 }, 00:11:43.349 "claimed": true, 00:11:43.349 "claim_type": "exclusive_write", 00:11:43.349 "zoned": false, 00:11:43.349 "supported_io_types": { 
00:11:43.349 "read": true, 00:11:43.349 "write": true, 00:11:43.349 "unmap": true, 00:11:43.349 "flush": true, 00:11:43.349 "reset": true, 00:11:43.349 "nvme_admin": false, 00:11:43.349 "nvme_io": false, 00:11:43.349 "nvme_io_md": false, 00:11:43.349 "write_zeroes": true, 00:11:43.349 "zcopy": true, 00:11:43.349 "get_zone_info": false, 00:11:43.349 "zone_management": false, 00:11:43.349 "zone_append": false, 00:11:43.349 "compare": false, 00:11:43.349 "compare_and_write": false, 00:11:43.349 "abort": true, 00:11:43.349 "seek_hole": false, 00:11:43.349 "seek_data": false, 00:11:43.349 "copy": true, 00:11:43.349 "nvme_iov_md": false 00:11:43.349 }, 00:11:43.349 "memory_domains": [ 00:11:43.349 { 00:11:43.349 "dma_device_id": "system", 00:11:43.349 "dma_device_type": 1 00:11:43.349 }, 00:11:43.349 { 00:11:43.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.349 "dma_device_type": 2 00:11:43.349 } 00:11:43.349 ], 00:11:43.349 "driver_specific": {} 00:11:43.349 } 00:11:43.349 ] 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=0 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.349 "name": "Existed_Raid", 00:11:43.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.349 "strip_size_kb": 0, 00:11:43.349 "state": "configuring", 00:11:43.349 "raid_level": "raid1", 00:11:43.349 "superblock": false, 00:11:43.349 "num_base_bdevs": 4, 00:11:43.349 "num_base_bdevs_discovered": 3, 00:11:43.349 "num_base_bdevs_operational": 4, 00:11:43.349 "base_bdevs_list": [ 00:11:43.349 { 00:11:43.349 "name": "BaseBdev1", 00:11:43.349 "uuid": "4cffa242-2117-4d98-8201-9f92623ddba4", 00:11:43.349 "is_configured": true, 00:11:43.349 "data_offset": 0, 00:11:43.349 "data_size": 65536 00:11:43.349 }, 00:11:43.349 { 00:11:43.349 "name": "BaseBdev2", 00:11:43.349 "uuid": "4d39343f-8061-46e3-95ff-384311cc6b2a", 00:11:43.349 
"is_configured": true, 00:11:43.349 "data_offset": 0, 00:11:43.349 "data_size": 65536 00:11:43.349 }, 00:11:43.349 { 00:11:43.349 "name": "BaseBdev3", 00:11:43.349 "uuid": "c19d5b77-4629-4a02-adbd-46d639cd0b01", 00:11:43.349 "is_configured": true, 00:11:43.349 "data_offset": 0, 00:11:43.349 "data_size": 65536 00:11:43.349 }, 00:11:43.349 { 00:11:43.349 "name": "BaseBdev4", 00:11:43.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.349 "is_configured": false, 00:11:43.349 "data_offset": 0, 00:11:43.349 "data_size": 0 00:11:43.349 } 00:11:43.349 ] 00:11:43.349 }' 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.349 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.609 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:43.609 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.609 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.609 [2024-11-19 12:03:46.981557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:43.609 [2024-11-19 12:03:46.981618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:43.609 [2024-11-19 12:03:46.981626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:43.609 [2024-11-19 12:03:46.981896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:43.609 [2024-11-19 12:03:46.982095] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:43.609 [2024-11-19 12:03:46.982112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:43.609 [2024-11-19 12:03:46.982406] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:11:43.869 BaseBdev4 00:11:43.869 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.869 12:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:43.869 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:43.869 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.869 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:43.869 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.869 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.869 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.869 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.869 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.869 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.869 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:43.869 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.869 12:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.869 [ 00:11:43.869 { 00:11:43.869 "name": "BaseBdev4", 00:11:43.869 "aliases": [ 00:11:43.869 "cd795c4d-f8b7-446e-8568-8af7cc110234" 00:11:43.869 ], 00:11:43.869 "product_name": "Malloc disk", 00:11:43.869 "block_size": 512, 00:11:43.869 "num_blocks": 65536, 00:11:43.869 "uuid": "cd795c4d-f8b7-446e-8568-8af7cc110234", 00:11:43.869 "assigned_rate_limits": { 
00:11:43.869 "rw_ios_per_sec": 0, 00:11:43.869 "rw_mbytes_per_sec": 0, 00:11:43.869 "r_mbytes_per_sec": 0, 00:11:43.869 "w_mbytes_per_sec": 0 00:11:43.869 }, 00:11:43.869 "claimed": true, 00:11:43.869 "claim_type": "exclusive_write", 00:11:43.869 "zoned": false, 00:11:43.869 "supported_io_types": { 00:11:43.869 "read": true, 00:11:43.869 "write": true, 00:11:43.869 "unmap": true, 00:11:43.869 "flush": true, 00:11:43.869 "reset": true, 00:11:43.869 "nvme_admin": false, 00:11:43.869 "nvme_io": false, 00:11:43.869 "nvme_io_md": false, 00:11:43.869 "write_zeroes": true, 00:11:43.869 "zcopy": true, 00:11:43.869 "get_zone_info": false, 00:11:43.869 "zone_management": false, 00:11:43.869 "zone_append": false, 00:11:43.869 "compare": false, 00:11:43.869 "compare_and_write": false, 00:11:43.869 "abort": true, 00:11:43.869 "seek_hole": false, 00:11:43.869 "seek_data": false, 00:11:43.869 "copy": true, 00:11:43.869 "nvme_iov_md": false 00:11:43.869 }, 00:11:43.869 "memory_domains": [ 00:11:43.869 { 00:11:43.869 "dma_device_id": "system", 00:11:43.869 "dma_device_type": 1 00:11:43.869 }, 00:11:43.869 { 00:11:43.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.869 "dma_device_type": 2 00:11:43.869 } 00:11:43.869 ], 00:11:43.869 "driver_specific": {} 00:11:43.869 } 00:11:43.869 ] 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.869 12:03:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.869 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.870 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.870 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.870 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.870 "name": "Existed_Raid", 00:11:43.870 "uuid": "263d49cd-90ea-48ae-85f1-a3ba8ac0ac80", 00:11:43.870 "strip_size_kb": 0, 00:11:43.870 "state": "online", 00:11:43.870 "raid_level": "raid1", 00:11:43.870 "superblock": false, 00:11:43.870 "num_base_bdevs": 4, 00:11:43.870 "num_base_bdevs_discovered": 4, 00:11:43.870 "num_base_bdevs_operational": 4, 00:11:43.870 "base_bdevs_list": [ 00:11:43.870 { 00:11:43.870 "name": "BaseBdev1", 00:11:43.870 
"uuid": "4cffa242-2117-4d98-8201-9f92623ddba4", 00:11:43.870 "is_configured": true, 00:11:43.870 "data_offset": 0, 00:11:43.870 "data_size": 65536 00:11:43.870 }, 00:11:43.870 { 00:11:43.870 "name": "BaseBdev2", 00:11:43.870 "uuid": "4d39343f-8061-46e3-95ff-384311cc6b2a", 00:11:43.870 "is_configured": true, 00:11:43.870 "data_offset": 0, 00:11:43.870 "data_size": 65536 00:11:43.870 }, 00:11:43.870 { 00:11:43.870 "name": "BaseBdev3", 00:11:43.870 "uuid": "c19d5b77-4629-4a02-adbd-46d639cd0b01", 00:11:43.870 "is_configured": true, 00:11:43.870 "data_offset": 0, 00:11:43.870 "data_size": 65536 00:11:43.870 }, 00:11:43.870 { 00:11:43.870 "name": "BaseBdev4", 00:11:43.870 "uuid": "cd795c4d-f8b7-446e-8568-8af7cc110234", 00:11:43.870 "is_configured": true, 00:11:43.870 "data_offset": 0, 00:11:43.870 "data_size": 65536 00:11:43.870 } 00:11:43.870 ] 00:11:43.870 }' 00:11:43.870 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.870 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.129 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:44.129 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:44.129 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:44.129 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:44.129 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:44.129 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:44.129 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:44.129 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:44.129 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.129 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:44.129 [2024-11-19 12:03:47.493078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.388 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.388 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:44.388 "name": "Existed_Raid", 00:11:44.388 "aliases": [ 00:11:44.388 "263d49cd-90ea-48ae-85f1-a3ba8ac0ac80" 00:11:44.388 ], 00:11:44.388 "product_name": "Raid Volume", 00:11:44.388 "block_size": 512, 00:11:44.388 "num_blocks": 65536, 00:11:44.388 "uuid": "263d49cd-90ea-48ae-85f1-a3ba8ac0ac80", 00:11:44.388 "assigned_rate_limits": { 00:11:44.388 "rw_ios_per_sec": 0, 00:11:44.388 "rw_mbytes_per_sec": 0, 00:11:44.388 "r_mbytes_per_sec": 0, 00:11:44.388 "w_mbytes_per_sec": 0 00:11:44.388 }, 00:11:44.388 "claimed": false, 00:11:44.388 "zoned": false, 00:11:44.388 "supported_io_types": { 00:11:44.388 "read": true, 00:11:44.388 "write": true, 00:11:44.388 "unmap": false, 00:11:44.388 "flush": false, 00:11:44.388 "reset": true, 00:11:44.388 "nvme_admin": false, 00:11:44.388 "nvme_io": false, 00:11:44.388 "nvme_io_md": false, 00:11:44.388 "write_zeroes": true, 00:11:44.388 "zcopy": false, 00:11:44.388 "get_zone_info": false, 00:11:44.388 "zone_management": false, 00:11:44.388 "zone_append": false, 00:11:44.388 "compare": false, 00:11:44.388 "compare_and_write": false, 00:11:44.388 "abort": false, 00:11:44.388 "seek_hole": false, 00:11:44.388 "seek_data": false, 00:11:44.388 "copy": false, 00:11:44.388 "nvme_iov_md": false 00:11:44.388 }, 00:11:44.388 "memory_domains": [ 00:11:44.388 { 00:11:44.388 "dma_device_id": "system", 00:11:44.388 "dma_device_type": 1 00:11:44.388 }, 00:11:44.388 { 00:11:44.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:44.388 "dma_device_type": 2 00:11:44.388 }, 00:11:44.388 { 00:11:44.388 "dma_device_id": "system", 00:11:44.388 "dma_device_type": 1 00:11:44.388 }, 00:11:44.388 { 00:11:44.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.388 "dma_device_type": 2 00:11:44.388 }, 00:11:44.388 { 00:11:44.388 "dma_device_id": "system", 00:11:44.388 "dma_device_type": 1 00:11:44.388 }, 00:11:44.388 { 00:11:44.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.388 "dma_device_type": 2 00:11:44.388 }, 00:11:44.388 { 00:11:44.388 "dma_device_id": "system", 00:11:44.388 "dma_device_type": 1 00:11:44.388 }, 00:11:44.388 { 00:11:44.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.388 "dma_device_type": 2 00:11:44.388 } 00:11:44.388 ], 00:11:44.388 "driver_specific": { 00:11:44.388 "raid": { 00:11:44.388 "uuid": "263d49cd-90ea-48ae-85f1-a3ba8ac0ac80", 00:11:44.388 "strip_size_kb": 0, 00:11:44.388 "state": "online", 00:11:44.388 "raid_level": "raid1", 00:11:44.388 "superblock": false, 00:11:44.388 "num_base_bdevs": 4, 00:11:44.388 "num_base_bdevs_discovered": 4, 00:11:44.388 "num_base_bdevs_operational": 4, 00:11:44.388 "base_bdevs_list": [ 00:11:44.388 { 00:11:44.388 "name": "BaseBdev1", 00:11:44.388 "uuid": "4cffa242-2117-4d98-8201-9f92623ddba4", 00:11:44.388 "is_configured": true, 00:11:44.388 "data_offset": 0, 00:11:44.388 "data_size": 65536 00:11:44.388 }, 00:11:44.388 { 00:11:44.388 "name": "BaseBdev2", 00:11:44.388 "uuid": "4d39343f-8061-46e3-95ff-384311cc6b2a", 00:11:44.388 "is_configured": true, 00:11:44.389 "data_offset": 0, 00:11:44.389 "data_size": 65536 00:11:44.389 }, 00:11:44.389 { 00:11:44.389 "name": "BaseBdev3", 00:11:44.389 "uuid": "c19d5b77-4629-4a02-adbd-46d639cd0b01", 00:11:44.389 "is_configured": true, 00:11:44.389 "data_offset": 0, 00:11:44.389 "data_size": 65536 00:11:44.389 }, 00:11:44.389 { 00:11:44.389 "name": "BaseBdev4", 00:11:44.389 "uuid": "cd795c4d-f8b7-446e-8568-8af7cc110234", 00:11:44.389 "is_configured": true, 00:11:44.389 
"data_offset": 0, 00:11:44.389 "data_size": 65536 00:11:44.389 } 00:11:44.389 ] 00:11:44.389 } 00:11:44.389 } 00:11:44.389 }' 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:44.389 BaseBdev2 00:11:44.389 BaseBdev3 00:11:44.389 BaseBdev4' 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.389 12:03:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.389 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.648 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:44.648 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.648 12:03:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:44.648 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.648 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.648 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.648 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.648 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:44.648 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.648 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.648 [2024-11-19 12:03:47.816214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:44.648 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.648 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:44.648 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:44.648 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.649 12:03:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.649 "name": "Existed_Raid", 00:11:44.649 "uuid": "263d49cd-90ea-48ae-85f1-a3ba8ac0ac80", 00:11:44.649 "strip_size_kb": 0, 00:11:44.649 "state": "online", 00:11:44.649 "raid_level": "raid1", 00:11:44.649 "superblock": false, 00:11:44.649 "num_base_bdevs": 4, 00:11:44.649 "num_base_bdevs_discovered": 3, 00:11:44.649 "num_base_bdevs_operational": 3, 00:11:44.649 "base_bdevs_list": [ 00:11:44.649 { 00:11:44.649 "name": null, 00:11:44.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.649 "is_configured": false, 00:11:44.649 "data_offset": 0, 00:11:44.649 
"data_size": 65536 00:11:44.649 }, 00:11:44.649 { 00:11:44.649 "name": "BaseBdev2", 00:11:44.649 "uuid": "4d39343f-8061-46e3-95ff-384311cc6b2a", 00:11:44.649 "is_configured": true, 00:11:44.649 "data_offset": 0, 00:11:44.649 "data_size": 65536 00:11:44.649 }, 00:11:44.649 { 00:11:44.649 "name": "BaseBdev3", 00:11:44.649 "uuid": "c19d5b77-4629-4a02-adbd-46d639cd0b01", 00:11:44.649 "is_configured": true, 00:11:44.649 "data_offset": 0, 00:11:44.649 "data_size": 65536 00:11:44.649 }, 00:11:44.649 { 00:11:44.649 "name": "BaseBdev4", 00:11:44.649 "uuid": "cd795c4d-f8b7-446e-8568-8af7cc110234", 00:11:44.649 "is_configured": true, 00:11:44.649 "data_offset": 0, 00:11:44.649 "data_size": 65536 00:11:44.649 } 00:11:44.649 ] 00:11:44.649 }' 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.649 12:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.218 [2024-11-19 12:03:48.423369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.218 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.218 [2024-11-19 12:03:48.578101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.478 [2024-11-19 12:03:48.730531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:45.478 [2024-11-19 12:03:48.730703] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.478 [2024-11-19 12:03:48.822481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.478 [2024-11-19 12:03:48.822603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.478 [2024-11-19 12:03:48.822644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007e80 name Existed_Raid, state offline 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.478 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.739 BaseBdev2 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # 
waitforbdev BaseBdev2 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.739 [ 00:11:45.739 { 00:11:45.739 "name": "BaseBdev2", 00:11:45.739 "aliases": [ 00:11:45.739 "bc8193e8-4c32-4379-9b7e-18abddc5bad4" 00:11:45.739 ], 00:11:45.739 "product_name": "Malloc disk", 00:11:45.739 "block_size": 512, 00:11:45.739 "num_blocks": 65536, 00:11:45.739 "uuid": "bc8193e8-4c32-4379-9b7e-18abddc5bad4", 00:11:45.739 "assigned_rate_limits": { 00:11:45.739 "rw_ios_per_sec": 0, 00:11:45.739 "rw_mbytes_per_sec": 0, 00:11:45.739 "r_mbytes_per_sec": 0, 00:11:45.739 "w_mbytes_per_sec": 0 00:11:45.739 }, 00:11:45.739 "claimed": false, 00:11:45.739 "zoned": false, 00:11:45.739 "supported_io_types": { 
00:11:45.739 "read": true, 00:11:45.739 "write": true, 00:11:45.739 "unmap": true, 00:11:45.739 "flush": true, 00:11:45.739 "reset": true, 00:11:45.739 "nvme_admin": false, 00:11:45.739 "nvme_io": false, 00:11:45.739 "nvme_io_md": false, 00:11:45.739 "write_zeroes": true, 00:11:45.739 "zcopy": true, 00:11:45.739 "get_zone_info": false, 00:11:45.739 "zone_management": false, 00:11:45.739 "zone_append": false, 00:11:45.739 "compare": false, 00:11:45.739 "compare_and_write": false, 00:11:45.739 "abort": true, 00:11:45.739 "seek_hole": false, 00:11:45.739 "seek_data": false, 00:11:45.739 "copy": true, 00:11:45.739 "nvme_iov_md": false 00:11:45.739 }, 00:11:45.739 "memory_domains": [ 00:11:45.739 { 00:11:45.739 "dma_device_id": "system", 00:11:45.739 "dma_device_type": 1 00:11:45.739 }, 00:11:45.739 { 00:11:45.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.739 "dma_device_type": 2 00:11:45.739 } 00:11:45.739 ], 00:11:45.739 "driver_specific": {} 00:11:45.739 } 00:11:45.739 ] 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.739 BaseBdev3 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 
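As an aside on the loop shape in this trace: both `(( i = 1 ))` loops (bdev_raid.sh@270 and @286) start at index 1 and act on `BaseBdev$((i + 1))` on each pass, so with `num_base_bdevs=4` they walk BaseBdev2 through BaseBdev4, matching the create/delete calls logged above. A minimal offline sketch of just that index arithmetic, with a plain array standing in for the `rpc_cmd` calls (the `touched` variable is illustrative, not part of the SPDK scripts):

```shell
#!/usr/bin/env bash
# Same bounds as the trace: 4 base bdevs, loop skips the first one.
num_base_bdevs=4
touched=()

# Mirrors the bdev_raid.sh@270/@286 pattern: i runs 1..num_base_bdevs-1,
# and each iteration names BaseBdev$((i + 1)) for the rpc_cmd call.
for (( i = 1; i < num_base_bdevs; i++ )); do
    touched+=("BaseBdev$((i + 1))")
done

echo "${touched[@]}"   # BaseBdev2 BaseBdev3 BaseBdev4
```

The off-by-one naming (index 1 maps to BaseBdev2) is why the trace never recreates or removes BaseBdev1 inside these loops.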
00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.739 12:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.739 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.739 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:45.739 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.739 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.739 [ 00:11:45.739 { 00:11:45.739 "name": "BaseBdev3", 00:11:45.739 "aliases": [ 00:11:45.739 "ae8f475e-4676-4863-a31b-33a44bcbe0c8" 00:11:45.739 ], 00:11:45.739 "product_name": "Malloc disk", 00:11:45.739 "block_size": 512, 00:11:45.739 "num_blocks": 65536, 00:11:45.739 "uuid": "ae8f475e-4676-4863-a31b-33a44bcbe0c8", 00:11:45.739 "assigned_rate_limits": { 00:11:45.739 "rw_ios_per_sec": 0, 00:11:45.739 "rw_mbytes_per_sec": 0, 00:11:45.740 "r_mbytes_per_sec": 0, 00:11:45.740 "w_mbytes_per_sec": 0 00:11:45.740 }, 00:11:45.740 "claimed": false, 00:11:45.740 "zoned": false, 00:11:45.740 "supported_io_types": { 00:11:45.740 "read": 
true, 00:11:45.740 "write": true, 00:11:45.740 "unmap": true, 00:11:45.740 "flush": true, 00:11:45.740 "reset": true, 00:11:45.740 "nvme_admin": false, 00:11:45.740 "nvme_io": false, 00:11:45.740 "nvme_io_md": false, 00:11:45.740 "write_zeroes": true, 00:11:45.740 "zcopy": true, 00:11:45.740 "get_zone_info": false, 00:11:45.740 "zone_management": false, 00:11:45.740 "zone_append": false, 00:11:45.740 "compare": false, 00:11:45.740 "compare_and_write": false, 00:11:45.740 "abort": true, 00:11:45.740 "seek_hole": false, 00:11:45.740 "seek_data": false, 00:11:45.740 "copy": true, 00:11:45.740 "nvme_iov_md": false 00:11:45.740 }, 00:11:45.740 "memory_domains": [ 00:11:45.740 { 00:11:45.740 "dma_device_id": "system", 00:11:45.740 "dma_device_type": 1 00:11:45.740 }, 00:11:45.740 { 00:11:45.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.740 "dma_device_type": 2 00:11:45.740 } 00:11:45.740 ], 00:11:45.740 "driver_specific": {} 00:11:45.740 } 00:11:45.740 ] 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.740 BaseBdev4 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:45.740 12:03:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.740 [ 00:11:45.740 { 00:11:45.740 "name": "BaseBdev4", 00:11:45.740 "aliases": [ 00:11:45.740 "d7df90f7-095b-49cb-aa85-7a83db333c9f" 00:11:45.740 ], 00:11:45.740 "product_name": "Malloc disk", 00:11:45.740 "block_size": 512, 00:11:45.740 "num_blocks": 65536, 00:11:45.740 "uuid": "d7df90f7-095b-49cb-aa85-7a83db333c9f", 00:11:45.740 "assigned_rate_limits": { 00:11:45.740 "rw_ios_per_sec": 0, 00:11:45.740 "rw_mbytes_per_sec": 0, 00:11:45.740 "r_mbytes_per_sec": 0, 00:11:45.740 "w_mbytes_per_sec": 0 00:11:45.740 }, 00:11:45.740 "claimed": false, 00:11:45.740 "zoned": false, 00:11:45.740 "supported_io_types": { 00:11:45.740 "read": true, 00:11:45.740 
"write": true, 00:11:45.740 "unmap": true, 00:11:45.740 "flush": true, 00:11:45.740 "reset": true, 00:11:45.740 "nvme_admin": false, 00:11:45.740 "nvme_io": false, 00:11:45.740 "nvme_io_md": false, 00:11:45.740 "write_zeroes": true, 00:11:45.740 "zcopy": true, 00:11:45.740 "get_zone_info": false, 00:11:45.740 "zone_management": false, 00:11:45.740 "zone_append": false, 00:11:45.740 "compare": false, 00:11:45.740 "compare_and_write": false, 00:11:45.740 "abort": true, 00:11:45.740 "seek_hole": false, 00:11:45.740 "seek_data": false, 00:11:45.740 "copy": true, 00:11:45.740 "nvme_iov_md": false 00:11:45.740 }, 00:11:45.740 "memory_domains": [ 00:11:45.740 { 00:11:45.740 "dma_device_id": "system", 00:11:45.740 "dma_device_type": 1 00:11:45.740 }, 00:11:45.740 { 00:11:45.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.740 "dma_device_type": 2 00:11:45.740 } 00:11:45.740 ], 00:11:45.740 "driver_specific": {} 00:11:45.740 } 00:11:45.740 ] 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.740 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.013 [2024-11-19 12:03:49.115188] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:46.013 [2024-11-19 12:03:49.115244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev1 doesn't exist now 00:11:46.013 [2024-11-19 12:03:49.115263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.013 [2024-11-19 12:03:49.117048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.013 [2024-11-19 12:03:49.117111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:46.013 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.013 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:46.013 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.013 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.013 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.013 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.013 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.013 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.013 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.013 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.013 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.014 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.014 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.014 12:03:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.014 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.014 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.014 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.014 "name": "Existed_Raid", 00:11:46.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.014 "strip_size_kb": 0, 00:11:46.014 "state": "configuring", 00:11:46.014 "raid_level": "raid1", 00:11:46.014 "superblock": false, 00:11:46.014 "num_base_bdevs": 4, 00:11:46.014 "num_base_bdevs_discovered": 3, 00:11:46.014 "num_base_bdevs_operational": 4, 00:11:46.014 "base_bdevs_list": [ 00:11:46.014 { 00:11:46.014 "name": "BaseBdev1", 00:11:46.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.014 "is_configured": false, 00:11:46.014 "data_offset": 0, 00:11:46.014 "data_size": 0 00:11:46.014 }, 00:11:46.014 { 00:11:46.014 "name": "BaseBdev2", 00:11:46.014 "uuid": "bc8193e8-4c32-4379-9b7e-18abddc5bad4", 00:11:46.014 "is_configured": true, 00:11:46.014 "data_offset": 0, 00:11:46.014 "data_size": 65536 00:11:46.014 }, 00:11:46.014 { 00:11:46.014 "name": "BaseBdev3", 00:11:46.014 "uuid": "ae8f475e-4676-4863-a31b-33a44bcbe0c8", 00:11:46.014 "is_configured": true, 00:11:46.014 "data_offset": 0, 00:11:46.014 "data_size": 65536 00:11:46.014 }, 00:11:46.014 { 00:11:46.014 "name": "BaseBdev4", 00:11:46.014 "uuid": "d7df90f7-095b-49cb-aa85-7a83db333c9f", 00:11:46.014 "is_configured": true, 00:11:46.014 "data_offset": 0, 00:11:46.014 "data_size": 65536 00:11:46.014 } 00:11:46.014 ] 00:11:46.014 }' 00:11:46.014 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.014 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.275 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev2 00:11:46.275 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.275 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.275 [2024-11-19 12:03:49.562560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:46.275 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.275 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.276 12:03:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.276 "name": "Existed_Raid", 00:11:46.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.276 "strip_size_kb": 0, 00:11:46.276 "state": "configuring", 00:11:46.276 "raid_level": "raid1", 00:11:46.276 "superblock": false, 00:11:46.276 "num_base_bdevs": 4, 00:11:46.276 "num_base_bdevs_discovered": 2, 00:11:46.276 "num_base_bdevs_operational": 4, 00:11:46.276 "base_bdevs_list": [ 00:11:46.276 { 00:11:46.276 "name": "BaseBdev1", 00:11:46.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.276 "is_configured": false, 00:11:46.276 "data_offset": 0, 00:11:46.276 "data_size": 0 00:11:46.276 }, 00:11:46.276 { 00:11:46.276 "name": null, 00:11:46.276 "uuid": "bc8193e8-4c32-4379-9b7e-18abddc5bad4", 00:11:46.276 "is_configured": false, 00:11:46.276 "data_offset": 0, 00:11:46.276 "data_size": 65536 00:11:46.276 }, 00:11:46.276 { 00:11:46.276 "name": "BaseBdev3", 00:11:46.276 "uuid": "ae8f475e-4676-4863-a31b-33a44bcbe0c8", 00:11:46.276 "is_configured": true, 00:11:46.276 "data_offset": 0, 00:11:46.276 "data_size": 65536 00:11:46.276 }, 00:11:46.276 { 00:11:46.276 "name": "BaseBdev4", 00:11:46.276 "uuid": "d7df90f7-095b-49cb-aa85-7a83db333c9f", 00:11:46.276 "is_configured": true, 00:11:46.276 "data_offset": 0, 00:11:46.276 "data_size": 65536 00:11:46.276 } 00:11:46.276 ] 00:11:46.276 }' 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.276 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.845 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.845 12:03:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.845 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.845 12:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:46.845 12:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.845 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:46.845 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:46.845 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.845 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.845 [2024-11-19 12:03:50.053644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.845 BaseBdev1 00:11:46.845 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.845 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:46.845 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:46.845 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.846 [ 00:11:46.846 { 00:11:46.846 "name": "BaseBdev1", 00:11:46.846 "aliases": [ 00:11:46.846 "5d739fdf-0a40-411c-b46f-5bf5841f9536" 00:11:46.846 ], 00:11:46.846 "product_name": "Malloc disk", 00:11:46.846 "block_size": 512, 00:11:46.846 "num_blocks": 65536, 00:11:46.846 "uuid": "5d739fdf-0a40-411c-b46f-5bf5841f9536", 00:11:46.846 "assigned_rate_limits": { 00:11:46.846 "rw_ios_per_sec": 0, 00:11:46.846 "rw_mbytes_per_sec": 0, 00:11:46.846 "r_mbytes_per_sec": 0, 00:11:46.846 "w_mbytes_per_sec": 0 00:11:46.846 }, 00:11:46.846 "claimed": true, 00:11:46.846 "claim_type": "exclusive_write", 00:11:46.846 "zoned": false, 00:11:46.846 "supported_io_types": { 00:11:46.846 "read": true, 00:11:46.846 "write": true, 00:11:46.846 "unmap": true, 00:11:46.846 "flush": true, 00:11:46.846 "reset": true, 00:11:46.846 "nvme_admin": false, 00:11:46.846 "nvme_io": false, 00:11:46.846 "nvme_io_md": false, 00:11:46.846 "write_zeroes": true, 00:11:46.846 "zcopy": true, 00:11:46.846 "get_zone_info": false, 00:11:46.846 "zone_management": false, 00:11:46.846 "zone_append": false, 00:11:46.846 "compare": false, 00:11:46.846 "compare_and_write": false, 00:11:46.846 "abort": true, 00:11:46.846 "seek_hole": false, 00:11:46.846 "seek_data": false, 00:11:46.846 "copy": true, 00:11:46.846 "nvme_iov_md": false 00:11:46.846 }, 00:11:46.846 "memory_domains": [ 00:11:46.846 { 00:11:46.846 
"dma_device_id": "system", 00:11:46.846 "dma_device_type": 1 00:11:46.846 }, 00:11:46.846 { 00:11:46.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.846 "dma_device_type": 2 00:11:46.846 } 00:11:46.846 ], 00:11:46.846 "driver_specific": {} 00:11:46.846 } 00:11:46.846 ] 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.846 12:03:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.846 "name": "Existed_Raid", 00:11:46.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.846 "strip_size_kb": 0, 00:11:46.846 "state": "configuring", 00:11:46.846 "raid_level": "raid1", 00:11:46.846 "superblock": false, 00:11:46.846 "num_base_bdevs": 4, 00:11:46.846 "num_base_bdevs_discovered": 3, 00:11:46.846 "num_base_bdevs_operational": 4, 00:11:46.846 "base_bdevs_list": [ 00:11:46.846 { 00:11:46.846 "name": "BaseBdev1", 00:11:46.846 "uuid": "5d739fdf-0a40-411c-b46f-5bf5841f9536", 00:11:46.846 "is_configured": true, 00:11:46.846 "data_offset": 0, 00:11:46.846 "data_size": 65536 00:11:46.846 }, 00:11:46.846 { 00:11:46.846 "name": null, 00:11:46.846 "uuid": "bc8193e8-4c32-4379-9b7e-18abddc5bad4", 00:11:46.846 "is_configured": false, 00:11:46.846 "data_offset": 0, 00:11:46.846 "data_size": 65536 00:11:46.846 }, 00:11:46.846 { 00:11:46.846 "name": "BaseBdev3", 00:11:46.846 "uuid": "ae8f475e-4676-4863-a31b-33a44bcbe0c8", 00:11:46.846 "is_configured": true, 00:11:46.846 "data_offset": 0, 00:11:46.846 "data_size": 65536 00:11:46.846 }, 00:11:46.846 { 00:11:46.846 "name": "BaseBdev4", 00:11:46.846 "uuid": "d7df90f7-095b-49cb-aa85-7a83db333c9f", 00:11:46.846 "is_configured": true, 00:11:46.846 "data_offset": 0, 00:11:46.846 "data_size": 65536 00:11:46.846 } 00:11:46.846 ] 00:11:46.846 }' 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.846 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.451 12:03:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.451 [2024-11-19 12:03:50.576846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.451 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.452 12:03:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.452 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.452 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.452 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.452 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.452 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.452 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.452 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.452 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.452 "name": "Existed_Raid", 00:11:47.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.452 "strip_size_kb": 0, 00:11:47.452 "state": "configuring", 00:11:47.452 "raid_level": "raid1", 00:11:47.452 "superblock": false, 00:11:47.452 "num_base_bdevs": 4, 00:11:47.452 "num_base_bdevs_discovered": 2, 00:11:47.452 "num_base_bdevs_operational": 4, 00:11:47.452 "base_bdevs_list": [ 00:11:47.452 { 00:11:47.452 "name": "BaseBdev1", 00:11:47.452 "uuid": "5d739fdf-0a40-411c-b46f-5bf5841f9536", 00:11:47.452 "is_configured": true, 00:11:47.452 "data_offset": 0, 00:11:47.452 "data_size": 65536 00:11:47.452 }, 00:11:47.452 { 00:11:47.452 "name": null, 00:11:47.452 "uuid": "bc8193e8-4c32-4379-9b7e-18abddc5bad4", 00:11:47.452 "is_configured": false, 00:11:47.452 "data_offset": 0, 00:11:47.452 "data_size": 65536 00:11:47.452 }, 00:11:47.452 { 00:11:47.452 "name": null, 00:11:47.452 "uuid": "ae8f475e-4676-4863-a31b-33a44bcbe0c8", 00:11:47.452 "is_configured": false, 00:11:47.452 "data_offset": 0, 00:11:47.452 "data_size": 65536 00:11:47.452 }, 
00:11:47.452 { 00:11:47.452 "name": "BaseBdev4", 00:11:47.452 "uuid": "d7df90f7-095b-49cb-aa85-7a83db333c9f", 00:11:47.452 "is_configured": true, 00:11:47.452 "data_offset": 0, 00:11:47.452 "data_size": 65536 00:11:47.452 } 00:11:47.452 ] 00:11:47.452 }' 00:11:47.452 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.452 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.712 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.712 12:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:47.712 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.712 12:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.712 [2024-11-19 12:03:51.032103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.712 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.971 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.971 "name": "Existed_Raid", 00:11:47.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.971 "strip_size_kb": 0, 00:11:47.971 "state": "configuring", 00:11:47.971 "raid_level": "raid1", 00:11:47.971 "superblock": false, 00:11:47.971 "num_base_bdevs": 4, 00:11:47.971 "num_base_bdevs_discovered": 3, 00:11:47.971 "num_base_bdevs_operational": 4, 00:11:47.971 "base_bdevs_list": [ 00:11:47.971 { 00:11:47.971 "name": 
"BaseBdev1", 00:11:47.971 "uuid": "5d739fdf-0a40-411c-b46f-5bf5841f9536", 00:11:47.972 "is_configured": true, 00:11:47.972 "data_offset": 0, 00:11:47.972 "data_size": 65536 00:11:47.972 }, 00:11:47.972 { 00:11:47.972 "name": null, 00:11:47.972 "uuid": "bc8193e8-4c32-4379-9b7e-18abddc5bad4", 00:11:47.972 "is_configured": false, 00:11:47.972 "data_offset": 0, 00:11:47.972 "data_size": 65536 00:11:47.972 }, 00:11:47.972 { 00:11:47.972 "name": "BaseBdev3", 00:11:47.972 "uuid": "ae8f475e-4676-4863-a31b-33a44bcbe0c8", 00:11:47.972 "is_configured": true, 00:11:47.972 "data_offset": 0, 00:11:47.972 "data_size": 65536 00:11:47.972 }, 00:11:47.972 { 00:11:47.972 "name": "BaseBdev4", 00:11:47.972 "uuid": "d7df90f7-095b-49cb-aa85-7a83db333c9f", 00:11:47.972 "is_configured": true, 00:11:47.972 "data_offset": 0, 00:11:47.972 "data_size": 65536 00:11:47.972 } 00:11:47.972 ] 00:11:47.972 }' 00:11:47.972 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.972 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.232 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.232 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.232 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.232 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:48.232 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.232 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:48.232 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:48.232 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:48.232 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.232 [2024-11-19 12:03:51.535230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.491 12:03:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.491 "name": "Existed_Raid", 00:11:48.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.491 "strip_size_kb": 0, 00:11:48.491 "state": "configuring", 00:11:48.491 "raid_level": "raid1", 00:11:48.491 "superblock": false, 00:11:48.491 "num_base_bdevs": 4, 00:11:48.491 "num_base_bdevs_discovered": 2, 00:11:48.491 "num_base_bdevs_operational": 4, 00:11:48.491 "base_bdevs_list": [ 00:11:48.491 { 00:11:48.491 "name": null, 00:11:48.491 "uuid": "5d739fdf-0a40-411c-b46f-5bf5841f9536", 00:11:48.491 "is_configured": false, 00:11:48.491 "data_offset": 0, 00:11:48.491 "data_size": 65536 00:11:48.491 }, 00:11:48.491 { 00:11:48.491 "name": null, 00:11:48.491 "uuid": "bc8193e8-4c32-4379-9b7e-18abddc5bad4", 00:11:48.491 "is_configured": false, 00:11:48.491 "data_offset": 0, 00:11:48.491 "data_size": 65536 00:11:48.491 }, 00:11:48.491 { 00:11:48.491 "name": "BaseBdev3", 00:11:48.491 "uuid": "ae8f475e-4676-4863-a31b-33a44bcbe0c8", 00:11:48.491 "is_configured": true, 00:11:48.491 "data_offset": 0, 00:11:48.491 "data_size": 65536 00:11:48.491 }, 00:11:48.491 { 00:11:48.491 "name": "BaseBdev4", 00:11:48.491 "uuid": "d7df90f7-095b-49cb-aa85-7a83db333c9f", 00:11:48.491 "is_configured": true, 00:11:48.491 "data_offset": 0, 00:11:48.491 "data_size": 65536 00:11:48.491 } 00:11:48.491 ] 00:11:48.491 }' 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.491 12:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:48.750 12:03:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.750 [2024-11-19 12:03:52.109830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.750 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.010 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.010 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.010 "name": "Existed_Raid", 00:11:49.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.010 "strip_size_kb": 0, 00:11:49.010 "state": "configuring", 00:11:49.010 "raid_level": "raid1", 00:11:49.010 "superblock": false, 00:11:49.010 "num_base_bdevs": 4, 00:11:49.010 "num_base_bdevs_discovered": 3, 00:11:49.010 "num_base_bdevs_operational": 4, 00:11:49.010 "base_bdevs_list": [ 00:11:49.010 { 00:11:49.010 "name": null, 00:11:49.010 "uuid": "5d739fdf-0a40-411c-b46f-5bf5841f9536", 00:11:49.010 "is_configured": false, 00:11:49.010 "data_offset": 0, 00:11:49.010 "data_size": 65536 00:11:49.010 }, 00:11:49.010 { 00:11:49.010 "name": "BaseBdev2", 00:11:49.010 "uuid": "bc8193e8-4c32-4379-9b7e-18abddc5bad4", 00:11:49.010 "is_configured": true, 00:11:49.010 "data_offset": 0, 00:11:49.010 "data_size": 65536 00:11:49.010 }, 00:11:49.010 { 00:11:49.010 "name": "BaseBdev3", 00:11:49.010 "uuid": "ae8f475e-4676-4863-a31b-33a44bcbe0c8", 00:11:49.010 "is_configured": true, 00:11:49.010 "data_offset": 0, 00:11:49.010 "data_size": 65536 00:11:49.010 }, 00:11:49.010 { 00:11:49.010 "name": "BaseBdev4", 00:11:49.010 "uuid": 
"d7df90f7-095b-49cb-aa85-7a83db333c9f", 00:11:49.010 "is_configured": true, 00:11:49.010 "data_offset": 0, 00:11:49.010 "data_size": 65536 00:11:49.010 } 00:11:49.010 ] 00:11:49.010 }' 00:11:49.010 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.010 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.269 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:49.269 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.269 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.269 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.269 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.269 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:49.269 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.269 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:49.269 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.269 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.269 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5d739fdf-0a40-411c-b46f-5bf5841f9536 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:49.529 [2024-11-19 12:03:52.694171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:49.529 [2024-11-19 12:03:52.694304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:49.529 [2024-11-19 12:03:52.694319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:49.529 [2024-11-19 12:03:52.694597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:49.529 [2024-11-19 12:03:52.694760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:49.529 [2024-11-19 12:03:52.694770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:49.529 [2024-11-19 12:03:52.695073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.529 NewBaseBdev 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.529 
12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.529 [ 00:11:49.529 { 00:11:49.529 "name": "NewBaseBdev", 00:11:49.529 "aliases": [ 00:11:49.529 "5d739fdf-0a40-411c-b46f-5bf5841f9536" 00:11:49.529 ], 00:11:49.529 "product_name": "Malloc disk", 00:11:49.529 "block_size": 512, 00:11:49.529 "num_blocks": 65536, 00:11:49.529 "uuid": "5d739fdf-0a40-411c-b46f-5bf5841f9536", 00:11:49.529 "assigned_rate_limits": { 00:11:49.529 "rw_ios_per_sec": 0, 00:11:49.529 "rw_mbytes_per_sec": 0, 00:11:49.529 "r_mbytes_per_sec": 0, 00:11:49.529 "w_mbytes_per_sec": 0 00:11:49.529 }, 00:11:49.529 "claimed": true, 00:11:49.529 "claim_type": "exclusive_write", 00:11:49.529 "zoned": false, 00:11:49.529 "supported_io_types": { 00:11:49.529 "read": true, 00:11:49.529 "write": true, 00:11:49.529 "unmap": true, 00:11:49.529 "flush": true, 00:11:49.529 "reset": true, 00:11:49.529 "nvme_admin": false, 00:11:49.529 "nvme_io": false, 00:11:49.529 "nvme_io_md": false, 00:11:49.529 "write_zeroes": true, 00:11:49.529 "zcopy": true, 00:11:49.529 "get_zone_info": false, 00:11:49.529 "zone_management": false, 00:11:49.529 "zone_append": false, 00:11:49.529 "compare": false, 00:11:49.529 "compare_and_write": false, 00:11:49.529 "abort": true, 00:11:49.529 "seek_hole": false, 00:11:49.529 "seek_data": false, 00:11:49.529 "copy": true, 00:11:49.529 "nvme_iov_md": false 00:11:49.529 }, 00:11:49.529 "memory_domains": [ 00:11:49.529 { 00:11:49.529 "dma_device_id": "system", 00:11:49.529 "dma_device_type": 1 
00:11:49.529 }, 00:11:49.529 { 00:11:49.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.529 "dma_device_type": 2 00:11:49.529 } 00:11:49.529 ], 00:11:49.529 "driver_specific": {} 00:11:49.529 } 00:11:49.529 ] 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.529 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.529 "name": "Existed_Raid", 00:11:49.529 "uuid": "d64af4c7-2780-41ea-9147-4c6249e89a2c", 00:11:49.529 "strip_size_kb": 0, 00:11:49.529 "state": "online", 00:11:49.529 "raid_level": "raid1", 00:11:49.529 "superblock": false, 00:11:49.529 "num_base_bdevs": 4, 00:11:49.529 "num_base_bdevs_discovered": 4, 00:11:49.529 "num_base_bdevs_operational": 4, 00:11:49.529 "base_bdevs_list": [ 00:11:49.529 { 00:11:49.529 "name": "NewBaseBdev", 00:11:49.529 "uuid": "5d739fdf-0a40-411c-b46f-5bf5841f9536", 00:11:49.529 "is_configured": true, 00:11:49.529 "data_offset": 0, 00:11:49.529 "data_size": 65536 00:11:49.529 }, 00:11:49.529 { 00:11:49.529 "name": "BaseBdev2", 00:11:49.530 "uuid": "bc8193e8-4c32-4379-9b7e-18abddc5bad4", 00:11:49.530 "is_configured": true, 00:11:49.530 "data_offset": 0, 00:11:49.530 "data_size": 65536 00:11:49.530 }, 00:11:49.530 { 00:11:49.530 "name": "BaseBdev3", 00:11:49.530 "uuid": "ae8f475e-4676-4863-a31b-33a44bcbe0c8", 00:11:49.530 "is_configured": true, 00:11:49.530 "data_offset": 0, 00:11:49.530 "data_size": 65536 00:11:49.530 }, 00:11:49.530 { 00:11:49.530 "name": "BaseBdev4", 00:11:49.530 "uuid": "d7df90f7-095b-49cb-aa85-7a83db333c9f", 00:11:49.530 "is_configured": true, 00:11:49.530 "data_offset": 0, 00:11:49.530 "data_size": 65536 00:11:49.530 } 00:11:49.530 ] 00:11:49.530 }' 00:11:49.530 12:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.530 12:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.100 [2024-11-19 12:03:53.205679] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:50.100 "name": "Existed_Raid", 00:11:50.100 "aliases": [ 00:11:50.100 "d64af4c7-2780-41ea-9147-4c6249e89a2c" 00:11:50.100 ], 00:11:50.100 "product_name": "Raid Volume", 00:11:50.100 "block_size": 512, 00:11:50.100 "num_blocks": 65536, 00:11:50.100 "uuid": "d64af4c7-2780-41ea-9147-4c6249e89a2c", 00:11:50.100 "assigned_rate_limits": { 00:11:50.100 "rw_ios_per_sec": 0, 00:11:50.100 "rw_mbytes_per_sec": 0, 00:11:50.100 "r_mbytes_per_sec": 0, 00:11:50.100 "w_mbytes_per_sec": 0 00:11:50.100 }, 00:11:50.100 "claimed": false, 00:11:50.100 "zoned": false, 00:11:50.100 "supported_io_types": { 00:11:50.100 "read": true, 00:11:50.100 "write": true, 00:11:50.100 "unmap": false, 00:11:50.100 "flush": false, 00:11:50.100 "reset": true, 00:11:50.100 
"nvme_admin": false, 00:11:50.100 "nvme_io": false, 00:11:50.100 "nvme_io_md": false, 00:11:50.100 "write_zeroes": true, 00:11:50.100 "zcopy": false, 00:11:50.100 "get_zone_info": false, 00:11:50.100 "zone_management": false, 00:11:50.100 "zone_append": false, 00:11:50.100 "compare": false, 00:11:50.100 "compare_and_write": false, 00:11:50.100 "abort": false, 00:11:50.100 "seek_hole": false, 00:11:50.100 "seek_data": false, 00:11:50.100 "copy": false, 00:11:50.100 "nvme_iov_md": false 00:11:50.100 }, 00:11:50.100 "memory_domains": [ 00:11:50.100 { 00:11:50.100 "dma_device_id": "system", 00:11:50.100 "dma_device_type": 1 00:11:50.100 }, 00:11:50.100 { 00:11:50.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.100 "dma_device_type": 2 00:11:50.100 }, 00:11:50.100 { 00:11:50.100 "dma_device_id": "system", 00:11:50.100 "dma_device_type": 1 00:11:50.100 }, 00:11:50.100 { 00:11:50.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.100 "dma_device_type": 2 00:11:50.100 }, 00:11:50.100 { 00:11:50.100 "dma_device_id": "system", 00:11:50.100 "dma_device_type": 1 00:11:50.100 }, 00:11:50.100 { 00:11:50.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.100 "dma_device_type": 2 00:11:50.100 }, 00:11:50.100 { 00:11:50.100 "dma_device_id": "system", 00:11:50.100 "dma_device_type": 1 00:11:50.100 }, 00:11:50.100 { 00:11:50.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.100 "dma_device_type": 2 00:11:50.100 } 00:11:50.100 ], 00:11:50.100 "driver_specific": { 00:11:50.100 "raid": { 00:11:50.100 "uuid": "d64af4c7-2780-41ea-9147-4c6249e89a2c", 00:11:50.100 "strip_size_kb": 0, 00:11:50.100 "state": "online", 00:11:50.100 "raid_level": "raid1", 00:11:50.100 "superblock": false, 00:11:50.100 "num_base_bdevs": 4, 00:11:50.100 "num_base_bdevs_discovered": 4, 00:11:50.100 "num_base_bdevs_operational": 4, 00:11:50.100 "base_bdevs_list": [ 00:11:50.100 { 00:11:50.100 "name": "NewBaseBdev", 00:11:50.100 "uuid": "5d739fdf-0a40-411c-b46f-5bf5841f9536", 00:11:50.100 
"is_configured": true, 00:11:50.100 "data_offset": 0, 00:11:50.100 "data_size": 65536 00:11:50.100 }, 00:11:50.100 { 00:11:50.100 "name": "BaseBdev2", 00:11:50.100 "uuid": "bc8193e8-4c32-4379-9b7e-18abddc5bad4", 00:11:50.100 "is_configured": true, 00:11:50.100 "data_offset": 0, 00:11:50.100 "data_size": 65536 00:11:50.100 }, 00:11:50.100 { 00:11:50.100 "name": "BaseBdev3", 00:11:50.100 "uuid": "ae8f475e-4676-4863-a31b-33a44bcbe0c8", 00:11:50.100 "is_configured": true, 00:11:50.100 "data_offset": 0, 00:11:50.100 "data_size": 65536 00:11:50.100 }, 00:11:50.100 { 00:11:50.100 "name": "BaseBdev4", 00:11:50.100 "uuid": "d7df90f7-095b-49cb-aa85-7a83db333c9f", 00:11:50.100 "is_configured": true, 00:11:50.100 "data_offset": 0, 00:11:50.100 "data_size": 65536 00:11:50.100 } 00:11:50.100 ] 00:11:50.100 } 00:11:50.100 } 00:11:50.100 }' 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:50.100 BaseBdev2 00:11:50.100 BaseBdev3 00:11:50.100 BaseBdev4' 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.100 12:03:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.100 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.101 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.101 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.101 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.101 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.101 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:50.101 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.101 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.101 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.101 12:03:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.361 [2024-11-19 12:03:53.532781] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:50.361 [2024-11-19 12:03:53.532814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:50.361 [2024-11-19 12:03:53.532923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.361 [2024-11-19 12:03:53.533261] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.361 [2024-11-19 12:03:53.533278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73189 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73189 ']' 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73189 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73189 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.361 killing process with pid 73189 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73189' 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73189 00:11:50.361 [2024-11-19 12:03:53.578498] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:50.361 12:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73189 00:11:50.620 [2024-11-19 12:03:53.979010] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:52.000 
************************************ 00:11:52.000 END TEST raid_state_function_test 00:11:52.000 ************************************ 00:11:52.000 00:11:52.000 real 0m11.554s 00:11:52.000 user 0m18.308s 00:11:52.000 sys 0m2.153s 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.000 12:03:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:52.000 12:03:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:52.000 12:03:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.000 12:03:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:52.000 ************************************ 00:11:52.000 START TEST raid_state_function_test_sb 00:11:52.000 ************************************ 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.000 
12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:52.000 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73861 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73861' 00:11:52.001 Process raid pid: 73861 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73861 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73861 ']' 00:11:52.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.001 12:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.001 [2024-11-19 12:03:55.250895] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:11:52.001 [2024-11-19 12:03:55.251245] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.261 [2024-11-19 12:03:55.439891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.261 [2024-11-19 12:03:55.559797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.521 [2024-11-19 12:03:55.768389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.521 [2024-11-19 12:03:55.768514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.780 [2024-11-19 12:03:56.070991] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.780 [2024-11-19 12:03:56.071066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:52.780 [2024-11-19 12:03:56.071077] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.780 [2024-11-19 12:03:56.071087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.780 [2024-11-19 12:03:56.071093] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:52.780 [2024-11-19 12:03:56.071102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.780 [2024-11-19 12:03:56.071113] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:52.780 [2024-11-19 12:03:56.071122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.780 12:03:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.780 "name": "Existed_Raid", 00:11:52.780 "uuid": "b5794cfc-8fd4-438c-84a2-b6ea51cc349f", 00:11:52.780 "strip_size_kb": 0, 00:11:52.780 "state": "configuring", 00:11:52.780 "raid_level": "raid1", 00:11:52.780 "superblock": true, 00:11:52.780 "num_base_bdevs": 4, 00:11:52.780 "num_base_bdevs_discovered": 0, 00:11:52.780 "num_base_bdevs_operational": 4, 00:11:52.780 "base_bdevs_list": [ 00:11:52.780 { 00:11:52.780 "name": "BaseBdev1", 00:11:52.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.780 "is_configured": false, 00:11:52.780 "data_offset": 0, 00:11:52.780 "data_size": 0 00:11:52.780 }, 00:11:52.780 { 00:11:52.780 "name": "BaseBdev2", 00:11:52.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.780 "is_configured": false, 00:11:52.780 "data_offset": 0, 00:11:52.780 "data_size": 0 00:11:52.780 }, 00:11:52.780 { 00:11:52.780 "name": "BaseBdev3", 00:11:52.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.780 "is_configured": false, 00:11:52.780 "data_offset": 0, 00:11:52.780 "data_size": 0 00:11:52.780 }, 00:11:52.780 { 00:11:52.780 "name": "BaseBdev4", 00:11:52.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.780 "is_configured": false, 00:11:52.780 "data_offset": 0, 00:11:52.780 "data_size": 0 00:11:52.780 } 00:11:52.780 ] 00:11:52.780 }' 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.780 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.353 [2024-11-19 12:03:56.538119] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:53.353 [2024-11-19 12:03:56.538230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.353 [2024-11-19 12:03:56.546108] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:53.353 [2024-11-19 12:03:56.546185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:53.353 [2024-11-19 12:03:56.546213] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:53.353 [2024-11-19 12:03:56.546236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:53.353 [2024-11-19 12:03:56.546255] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:53.353 [2024-11-19 12:03:56.546276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:53.353 [2024-11-19 12:03:56.546294] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:53.353 [2024-11-19 12:03:56.546315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.353 [2024-11-19 12:03:56.593606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.353 BaseBdev1 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.353 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.353 [ 00:11:53.353 { 00:11:53.353 "name": "BaseBdev1", 00:11:53.353 "aliases": [ 00:11:53.353 "3f54d667-19b3-4645-8601-d7d5aaf340a6" 00:11:53.353 ], 00:11:53.353 "product_name": "Malloc disk", 00:11:53.353 "block_size": 512, 00:11:53.353 "num_blocks": 65536, 00:11:53.353 "uuid": "3f54d667-19b3-4645-8601-d7d5aaf340a6", 00:11:53.353 "assigned_rate_limits": { 00:11:53.353 "rw_ios_per_sec": 0, 00:11:53.353 "rw_mbytes_per_sec": 0, 00:11:53.353 "r_mbytes_per_sec": 0, 00:11:53.353 "w_mbytes_per_sec": 0 00:11:53.353 }, 00:11:53.353 "claimed": true, 00:11:53.353 "claim_type": "exclusive_write", 00:11:53.353 "zoned": false, 00:11:53.353 "supported_io_types": { 00:11:53.353 "read": true, 00:11:53.353 "write": true, 00:11:53.353 "unmap": true, 00:11:53.353 "flush": true, 00:11:53.353 "reset": true, 00:11:53.353 "nvme_admin": false, 00:11:53.353 "nvme_io": false, 00:11:53.353 "nvme_io_md": false, 00:11:53.354 "write_zeroes": true, 00:11:53.354 "zcopy": true, 00:11:53.354 "get_zone_info": false, 00:11:53.354 "zone_management": false, 00:11:53.354 "zone_append": false, 00:11:53.354 "compare": false, 00:11:53.354 "compare_and_write": false, 00:11:53.354 "abort": true, 00:11:53.354 "seek_hole": false, 00:11:53.354 "seek_data": false, 00:11:53.354 "copy": true, 00:11:53.354 "nvme_iov_md": false 00:11:53.354 }, 00:11:53.354 "memory_domains": [ 00:11:53.354 { 00:11:53.354 "dma_device_id": "system", 00:11:53.354 "dma_device_type": 1 00:11:53.354 }, 00:11:53.354 { 00:11:53.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.354 "dma_device_type": 2 00:11:53.354 } 00:11:53.354 ], 00:11:53.354 "driver_specific": {} 
00:11:53.354 } 00:11:53.354 ] 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.354 "name": "Existed_Raid", 00:11:53.354 "uuid": "997f4811-c7fe-4297-8ac4-45e935689a52", 00:11:53.354 "strip_size_kb": 0, 00:11:53.354 "state": "configuring", 00:11:53.354 "raid_level": "raid1", 00:11:53.354 "superblock": true, 00:11:53.354 "num_base_bdevs": 4, 00:11:53.354 "num_base_bdevs_discovered": 1, 00:11:53.354 "num_base_bdevs_operational": 4, 00:11:53.354 "base_bdevs_list": [ 00:11:53.354 { 00:11:53.354 "name": "BaseBdev1", 00:11:53.354 "uuid": "3f54d667-19b3-4645-8601-d7d5aaf340a6", 00:11:53.354 "is_configured": true, 00:11:53.354 "data_offset": 2048, 00:11:53.354 "data_size": 63488 00:11:53.354 }, 00:11:53.354 { 00:11:53.354 "name": "BaseBdev2", 00:11:53.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.354 "is_configured": false, 00:11:53.354 "data_offset": 0, 00:11:53.354 "data_size": 0 00:11:53.354 }, 00:11:53.354 { 00:11:53.354 "name": "BaseBdev3", 00:11:53.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.354 "is_configured": false, 00:11:53.354 "data_offset": 0, 00:11:53.354 "data_size": 0 00:11:53.354 }, 00:11:53.354 { 00:11:53.354 "name": "BaseBdev4", 00:11:53.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.354 "is_configured": false, 00:11:53.354 "data_offset": 0, 00:11:53.354 "data_size": 0 00:11:53.354 } 00:11:53.354 ] 00:11:53.354 }' 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.354 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.924 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:53.924 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.924 12:03:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:53.924 [2024-11-19 12:03:57.056876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:53.924 [2024-11-19 12:03:57.056934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.925 [2024-11-19 12:03:57.064910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.925 [2024-11-19 12:03:57.066762] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:53.925 [2024-11-19 12:03:57.066804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:53.925 [2024-11-19 12:03:57.066814] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:53.925 [2024-11-19 12:03:57.066825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:53.925 [2024-11-19 12:03:57.066832] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:53.925 [2024-11-19 12:03:57.066840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:53.925 12:03:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.925 "name": 
"Existed_Raid", 00:11:53.925 "uuid": "1351f0dc-9be3-4404-9316-cf3897983641", 00:11:53.925 "strip_size_kb": 0, 00:11:53.925 "state": "configuring", 00:11:53.925 "raid_level": "raid1", 00:11:53.925 "superblock": true, 00:11:53.925 "num_base_bdevs": 4, 00:11:53.925 "num_base_bdevs_discovered": 1, 00:11:53.925 "num_base_bdevs_operational": 4, 00:11:53.925 "base_bdevs_list": [ 00:11:53.925 { 00:11:53.925 "name": "BaseBdev1", 00:11:53.925 "uuid": "3f54d667-19b3-4645-8601-d7d5aaf340a6", 00:11:53.925 "is_configured": true, 00:11:53.925 "data_offset": 2048, 00:11:53.925 "data_size": 63488 00:11:53.925 }, 00:11:53.925 { 00:11:53.925 "name": "BaseBdev2", 00:11:53.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.925 "is_configured": false, 00:11:53.925 "data_offset": 0, 00:11:53.925 "data_size": 0 00:11:53.925 }, 00:11:53.925 { 00:11:53.925 "name": "BaseBdev3", 00:11:53.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.925 "is_configured": false, 00:11:53.925 "data_offset": 0, 00:11:53.925 "data_size": 0 00:11:53.925 }, 00:11:53.925 { 00:11:53.925 "name": "BaseBdev4", 00:11:53.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.925 "is_configured": false, 00:11:53.925 "data_offset": 0, 00:11:53.925 "data_size": 0 00:11:53.925 } 00:11:53.925 ] 00:11:53.925 }' 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.925 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.183 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:54.183 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.183 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.443 [2024-11-19 12:03:57.570876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.443 
BaseBdev2 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.443 [ 00:11:54.443 { 00:11:54.443 "name": "BaseBdev2", 00:11:54.443 "aliases": [ 00:11:54.443 "36e0c5c9-5915-4402-a215-81fcb502b07e" 00:11:54.443 ], 00:11:54.443 "product_name": "Malloc disk", 00:11:54.443 "block_size": 512, 00:11:54.443 "num_blocks": 65536, 00:11:54.443 "uuid": "36e0c5c9-5915-4402-a215-81fcb502b07e", 00:11:54.443 "assigned_rate_limits": { 
00:11:54.443 "rw_ios_per_sec": 0, 00:11:54.443 "rw_mbytes_per_sec": 0, 00:11:54.443 "r_mbytes_per_sec": 0, 00:11:54.443 "w_mbytes_per_sec": 0 00:11:54.443 }, 00:11:54.443 "claimed": true, 00:11:54.443 "claim_type": "exclusive_write", 00:11:54.443 "zoned": false, 00:11:54.443 "supported_io_types": { 00:11:54.443 "read": true, 00:11:54.443 "write": true, 00:11:54.443 "unmap": true, 00:11:54.443 "flush": true, 00:11:54.443 "reset": true, 00:11:54.443 "nvme_admin": false, 00:11:54.443 "nvme_io": false, 00:11:54.443 "nvme_io_md": false, 00:11:54.443 "write_zeroes": true, 00:11:54.443 "zcopy": true, 00:11:54.443 "get_zone_info": false, 00:11:54.443 "zone_management": false, 00:11:54.443 "zone_append": false, 00:11:54.443 "compare": false, 00:11:54.443 "compare_and_write": false, 00:11:54.443 "abort": true, 00:11:54.443 "seek_hole": false, 00:11:54.443 "seek_data": false, 00:11:54.443 "copy": true, 00:11:54.443 "nvme_iov_md": false 00:11:54.443 }, 00:11:54.443 "memory_domains": [ 00:11:54.443 { 00:11:54.443 "dma_device_id": "system", 00:11:54.443 "dma_device_type": 1 00:11:54.443 }, 00:11:54.443 { 00:11:54.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.443 "dma_device_type": 2 00:11:54.443 } 00:11:54.443 ], 00:11:54.443 "driver_specific": {} 00:11:54.443 } 00:11:54.443 ] 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.443 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.444 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.444 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.444 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.444 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.444 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.444 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.444 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.444 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.444 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.444 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.444 "name": "Existed_Raid", 00:11:54.444 "uuid": "1351f0dc-9be3-4404-9316-cf3897983641", 00:11:54.444 "strip_size_kb": 0, 00:11:54.444 "state": "configuring", 00:11:54.444 "raid_level": "raid1", 00:11:54.444 "superblock": true, 00:11:54.444 "num_base_bdevs": 4, 00:11:54.444 "num_base_bdevs_discovered": 2, 00:11:54.444 "num_base_bdevs_operational": 4, 00:11:54.444 
"base_bdevs_list": [ 00:11:54.444 { 00:11:54.444 "name": "BaseBdev1", 00:11:54.444 "uuid": "3f54d667-19b3-4645-8601-d7d5aaf340a6", 00:11:54.444 "is_configured": true, 00:11:54.444 "data_offset": 2048, 00:11:54.444 "data_size": 63488 00:11:54.444 }, 00:11:54.444 { 00:11:54.444 "name": "BaseBdev2", 00:11:54.444 "uuid": "36e0c5c9-5915-4402-a215-81fcb502b07e", 00:11:54.444 "is_configured": true, 00:11:54.444 "data_offset": 2048, 00:11:54.444 "data_size": 63488 00:11:54.444 }, 00:11:54.444 { 00:11:54.444 "name": "BaseBdev3", 00:11:54.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.444 "is_configured": false, 00:11:54.444 "data_offset": 0, 00:11:54.444 "data_size": 0 00:11:54.444 }, 00:11:54.444 { 00:11:54.444 "name": "BaseBdev4", 00:11:54.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.444 "is_configured": false, 00:11:54.444 "data_offset": 0, 00:11:54.444 "data_size": 0 00:11:54.444 } 00:11:54.444 ] 00:11:54.444 }' 00:11:54.444 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.444 12:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.703 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:54.703 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.703 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.703 [2024-11-19 12:03:58.069530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.703 BaseBdev3 00:11:54.703 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.703 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:54.703 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:54.703 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.703 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:54.703 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.703 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.703 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.703 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.703 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.962 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.962 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:54.962 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.962 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.962 [ 00:11:54.962 { 00:11:54.962 "name": "BaseBdev3", 00:11:54.962 "aliases": [ 00:11:54.962 "1d94d2ca-fb5a-400c-b928-e82c416c3dae" 00:11:54.962 ], 00:11:54.962 "product_name": "Malloc disk", 00:11:54.962 "block_size": 512, 00:11:54.962 "num_blocks": 65536, 00:11:54.962 "uuid": "1d94d2ca-fb5a-400c-b928-e82c416c3dae", 00:11:54.962 "assigned_rate_limits": { 00:11:54.962 "rw_ios_per_sec": 0, 00:11:54.962 "rw_mbytes_per_sec": 0, 00:11:54.962 "r_mbytes_per_sec": 0, 00:11:54.962 "w_mbytes_per_sec": 0 00:11:54.962 }, 00:11:54.962 "claimed": true, 00:11:54.962 "claim_type": "exclusive_write", 00:11:54.962 "zoned": false, 00:11:54.962 "supported_io_types": { 00:11:54.962 "read": true, 00:11:54.962 
"write": true, 00:11:54.962 "unmap": true, 00:11:54.962 "flush": true, 00:11:54.962 "reset": true, 00:11:54.962 "nvme_admin": false, 00:11:54.962 "nvme_io": false, 00:11:54.962 "nvme_io_md": false, 00:11:54.962 "write_zeroes": true, 00:11:54.962 "zcopy": true, 00:11:54.962 "get_zone_info": false, 00:11:54.962 "zone_management": false, 00:11:54.962 "zone_append": false, 00:11:54.962 "compare": false, 00:11:54.962 "compare_and_write": false, 00:11:54.962 "abort": true, 00:11:54.962 "seek_hole": false, 00:11:54.962 "seek_data": false, 00:11:54.962 "copy": true, 00:11:54.962 "nvme_iov_md": false 00:11:54.962 }, 00:11:54.962 "memory_domains": [ 00:11:54.962 { 00:11:54.962 "dma_device_id": "system", 00:11:54.962 "dma_device_type": 1 00:11:54.962 }, 00:11:54.962 { 00:11:54.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.963 "dma_device_type": 2 00:11:54.963 } 00:11:54.963 ], 00:11:54.963 "driver_specific": {} 00:11:54.963 } 00:11:54.963 ] 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.963 "name": "Existed_Raid", 00:11:54.963 "uuid": "1351f0dc-9be3-4404-9316-cf3897983641", 00:11:54.963 "strip_size_kb": 0, 00:11:54.963 "state": "configuring", 00:11:54.963 "raid_level": "raid1", 00:11:54.963 "superblock": true, 00:11:54.963 "num_base_bdevs": 4, 00:11:54.963 "num_base_bdevs_discovered": 3, 00:11:54.963 "num_base_bdevs_operational": 4, 00:11:54.963 "base_bdevs_list": [ 00:11:54.963 { 00:11:54.963 "name": "BaseBdev1", 00:11:54.963 "uuid": "3f54d667-19b3-4645-8601-d7d5aaf340a6", 00:11:54.963 "is_configured": true, 00:11:54.963 "data_offset": 2048, 00:11:54.963 "data_size": 63488 00:11:54.963 }, 00:11:54.963 { 00:11:54.963 "name": "BaseBdev2", 00:11:54.963 "uuid": 
"36e0c5c9-5915-4402-a215-81fcb502b07e", 00:11:54.963 "is_configured": true, 00:11:54.963 "data_offset": 2048, 00:11:54.963 "data_size": 63488 00:11:54.963 }, 00:11:54.963 { 00:11:54.963 "name": "BaseBdev3", 00:11:54.963 "uuid": "1d94d2ca-fb5a-400c-b928-e82c416c3dae", 00:11:54.963 "is_configured": true, 00:11:54.963 "data_offset": 2048, 00:11:54.963 "data_size": 63488 00:11:54.963 }, 00:11:54.963 { 00:11:54.963 "name": "BaseBdev4", 00:11:54.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.963 "is_configured": false, 00:11:54.963 "data_offset": 0, 00:11:54.963 "data_size": 0 00:11:54.963 } 00:11:54.963 ] 00:11:54.963 }' 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.963 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.223 [2024-11-19 12:03:58.571811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:55.223 [2024-11-19 12:03:58.572241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:55.223 [2024-11-19 12:03:58.572301] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:55.223 [2024-11-19 12:03:58.572637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:55.223 BaseBdev4 00:11:55.223 [2024-11-19 12:03:58.572853] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:55.223 [2024-11-19 12:03:58.572881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:55.223 [2024-11-19 12:03:58.573070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.223 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.484 [ 00:11:55.484 { 00:11:55.484 "name": "BaseBdev4", 00:11:55.484 "aliases": [ 00:11:55.484 "cd3cc27e-2f39-45df-8ea7-36145ce7d9ac" 00:11:55.484 ], 00:11:55.484 "product_name": "Malloc disk", 00:11:55.484 "block_size": 512, 00:11:55.484 
"num_blocks": 65536, 00:11:55.484 "uuid": "cd3cc27e-2f39-45df-8ea7-36145ce7d9ac", 00:11:55.484 "assigned_rate_limits": { 00:11:55.484 "rw_ios_per_sec": 0, 00:11:55.484 "rw_mbytes_per_sec": 0, 00:11:55.484 "r_mbytes_per_sec": 0, 00:11:55.484 "w_mbytes_per_sec": 0 00:11:55.484 }, 00:11:55.484 "claimed": true, 00:11:55.484 "claim_type": "exclusive_write", 00:11:55.484 "zoned": false, 00:11:55.484 "supported_io_types": { 00:11:55.484 "read": true, 00:11:55.484 "write": true, 00:11:55.484 "unmap": true, 00:11:55.484 "flush": true, 00:11:55.484 "reset": true, 00:11:55.484 "nvme_admin": false, 00:11:55.484 "nvme_io": false, 00:11:55.484 "nvme_io_md": false, 00:11:55.484 "write_zeroes": true, 00:11:55.484 "zcopy": true, 00:11:55.484 "get_zone_info": false, 00:11:55.484 "zone_management": false, 00:11:55.484 "zone_append": false, 00:11:55.484 "compare": false, 00:11:55.484 "compare_and_write": false, 00:11:55.484 "abort": true, 00:11:55.484 "seek_hole": false, 00:11:55.484 "seek_data": false, 00:11:55.484 "copy": true, 00:11:55.484 "nvme_iov_md": false 00:11:55.484 }, 00:11:55.484 "memory_domains": [ 00:11:55.484 { 00:11:55.484 "dma_device_id": "system", 00:11:55.484 "dma_device_type": 1 00:11:55.484 }, 00:11:55.484 { 00:11:55.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.484 "dma_device_type": 2 00:11:55.484 } 00:11:55.484 ], 00:11:55.484 "driver_specific": {} 00:11:55.484 } 00:11:55.484 ] 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.484 "name": "Existed_Raid", 00:11:55.484 "uuid": "1351f0dc-9be3-4404-9316-cf3897983641", 00:11:55.484 "strip_size_kb": 0, 00:11:55.484 "state": "online", 00:11:55.484 "raid_level": "raid1", 00:11:55.484 "superblock": true, 00:11:55.484 "num_base_bdevs": 4, 
00:11:55.484 "num_base_bdevs_discovered": 4, 00:11:55.484 "num_base_bdevs_operational": 4, 00:11:55.484 "base_bdevs_list": [ 00:11:55.484 { 00:11:55.484 "name": "BaseBdev1", 00:11:55.484 "uuid": "3f54d667-19b3-4645-8601-d7d5aaf340a6", 00:11:55.484 "is_configured": true, 00:11:55.484 "data_offset": 2048, 00:11:55.484 "data_size": 63488 00:11:55.484 }, 00:11:55.484 { 00:11:55.484 "name": "BaseBdev2", 00:11:55.484 "uuid": "36e0c5c9-5915-4402-a215-81fcb502b07e", 00:11:55.484 "is_configured": true, 00:11:55.484 "data_offset": 2048, 00:11:55.484 "data_size": 63488 00:11:55.484 }, 00:11:55.484 { 00:11:55.484 "name": "BaseBdev3", 00:11:55.484 "uuid": "1d94d2ca-fb5a-400c-b928-e82c416c3dae", 00:11:55.484 "is_configured": true, 00:11:55.484 "data_offset": 2048, 00:11:55.484 "data_size": 63488 00:11:55.484 }, 00:11:55.484 { 00:11:55.484 "name": "BaseBdev4", 00:11:55.484 "uuid": "cd3cc27e-2f39-45df-8ea7-36145ce7d9ac", 00:11:55.484 "is_configured": true, 00:11:55.484 "data_offset": 2048, 00:11:55.484 "data_size": 63488 00:11:55.484 } 00:11:55.484 ] 00:11:55.484 }' 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.484 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.743 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:55.743 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:55.743 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:55.743 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:55.743 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:55.743 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:55.743 
12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:55.743 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.743 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:55.743 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.743 [2024-11-19 12:03:59.063472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:55.743 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.743 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:55.743 "name": "Existed_Raid", 00:11:55.743 "aliases": [ 00:11:55.743 "1351f0dc-9be3-4404-9316-cf3897983641" 00:11:55.743 ], 00:11:55.743 "product_name": "Raid Volume", 00:11:55.743 "block_size": 512, 00:11:55.743 "num_blocks": 63488, 00:11:55.743 "uuid": "1351f0dc-9be3-4404-9316-cf3897983641", 00:11:55.743 "assigned_rate_limits": { 00:11:55.743 "rw_ios_per_sec": 0, 00:11:55.743 "rw_mbytes_per_sec": 0, 00:11:55.743 "r_mbytes_per_sec": 0, 00:11:55.744 "w_mbytes_per_sec": 0 00:11:55.744 }, 00:11:55.744 "claimed": false, 00:11:55.744 "zoned": false, 00:11:55.744 "supported_io_types": { 00:11:55.744 "read": true, 00:11:55.744 "write": true, 00:11:55.744 "unmap": false, 00:11:55.744 "flush": false, 00:11:55.744 "reset": true, 00:11:55.744 "nvme_admin": false, 00:11:55.744 "nvme_io": false, 00:11:55.744 "nvme_io_md": false, 00:11:55.744 "write_zeroes": true, 00:11:55.744 "zcopy": false, 00:11:55.744 "get_zone_info": false, 00:11:55.744 "zone_management": false, 00:11:55.744 "zone_append": false, 00:11:55.744 "compare": false, 00:11:55.744 "compare_and_write": false, 00:11:55.744 "abort": false, 00:11:55.744 "seek_hole": false, 00:11:55.744 "seek_data": false, 00:11:55.744 "copy": false, 00:11:55.744 
"nvme_iov_md": false 00:11:55.744 }, 00:11:55.744 "memory_domains": [ 00:11:55.744 { 00:11:55.744 "dma_device_id": "system", 00:11:55.744 "dma_device_type": 1 00:11:55.744 }, 00:11:55.744 { 00:11:55.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.744 "dma_device_type": 2 00:11:55.744 }, 00:11:55.744 { 00:11:55.744 "dma_device_id": "system", 00:11:55.744 "dma_device_type": 1 00:11:55.744 }, 00:11:55.744 { 00:11:55.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.744 "dma_device_type": 2 00:11:55.744 }, 00:11:55.744 { 00:11:55.744 "dma_device_id": "system", 00:11:55.744 "dma_device_type": 1 00:11:55.744 }, 00:11:55.744 { 00:11:55.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.744 "dma_device_type": 2 00:11:55.744 }, 00:11:55.744 { 00:11:55.744 "dma_device_id": "system", 00:11:55.744 "dma_device_type": 1 00:11:55.744 }, 00:11:55.744 { 00:11:55.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.744 "dma_device_type": 2 00:11:55.744 } 00:11:55.744 ], 00:11:55.744 "driver_specific": { 00:11:55.744 "raid": { 00:11:55.744 "uuid": "1351f0dc-9be3-4404-9316-cf3897983641", 00:11:55.744 "strip_size_kb": 0, 00:11:55.744 "state": "online", 00:11:55.744 "raid_level": "raid1", 00:11:55.744 "superblock": true, 00:11:55.744 "num_base_bdevs": 4, 00:11:55.744 "num_base_bdevs_discovered": 4, 00:11:55.744 "num_base_bdevs_operational": 4, 00:11:55.744 "base_bdevs_list": [ 00:11:55.744 { 00:11:55.744 "name": "BaseBdev1", 00:11:55.744 "uuid": "3f54d667-19b3-4645-8601-d7d5aaf340a6", 00:11:55.744 "is_configured": true, 00:11:55.744 "data_offset": 2048, 00:11:55.744 "data_size": 63488 00:11:55.744 }, 00:11:55.744 { 00:11:55.744 "name": "BaseBdev2", 00:11:55.744 "uuid": "36e0c5c9-5915-4402-a215-81fcb502b07e", 00:11:55.744 "is_configured": true, 00:11:55.744 "data_offset": 2048, 00:11:55.744 "data_size": 63488 00:11:55.744 }, 00:11:55.744 { 00:11:55.744 "name": "BaseBdev3", 00:11:55.744 "uuid": "1d94d2ca-fb5a-400c-b928-e82c416c3dae", 00:11:55.744 "is_configured": true, 
00:11:55.744 "data_offset": 2048, 00:11:55.744 "data_size": 63488 00:11:55.744 }, 00:11:55.744 { 00:11:55.744 "name": "BaseBdev4", 00:11:55.744 "uuid": "cd3cc27e-2f39-45df-8ea7-36145ce7d9ac", 00:11:55.744 "is_configured": true, 00:11:55.744 "data_offset": 2048, 00:11:55.744 "data_size": 63488 00:11:55.744 } 00:11:55.744 ] 00:11:55.744 } 00:11:55.744 } 00:11:55.744 }' 00:11:55.744 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:56.003 BaseBdev2 00:11:56.003 BaseBdev3 00:11:56.003 BaseBdev4' 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.003 12:03:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.003 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.262 [2024-11-19 12:03:59.382606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:56.262 12:03:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.262 "name": "Existed_Raid", 00:11:56.262 "uuid": "1351f0dc-9be3-4404-9316-cf3897983641", 00:11:56.262 "strip_size_kb": 0, 00:11:56.262 
"state": "online", 00:11:56.262 "raid_level": "raid1", 00:11:56.262 "superblock": true, 00:11:56.262 "num_base_bdevs": 4, 00:11:56.262 "num_base_bdevs_discovered": 3, 00:11:56.262 "num_base_bdevs_operational": 3, 00:11:56.262 "base_bdevs_list": [ 00:11:56.262 { 00:11:56.262 "name": null, 00:11:56.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.262 "is_configured": false, 00:11:56.262 "data_offset": 0, 00:11:56.262 "data_size": 63488 00:11:56.262 }, 00:11:56.262 { 00:11:56.262 "name": "BaseBdev2", 00:11:56.262 "uuid": "36e0c5c9-5915-4402-a215-81fcb502b07e", 00:11:56.262 "is_configured": true, 00:11:56.262 "data_offset": 2048, 00:11:56.262 "data_size": 63488 00:11:56.262 }, 00:11:56.262 { 00:11:56.262 "name": "BaseBdev3", 00:11:56.262 "uuid": "1d94d2ca-fb5a-400c-b928-e82c416c3dae", 00:11:56.262 "is_configured": true, 00:11:56.262 "data_offset": 2048, 00:11:56.262 "data_size": 63488 00:11:56.262 }, 00:11:56.262 { 00:11:56.262 "name": "BaseBdev4", 00:11:56.262 "uuid": "cd3cc27e-2f39-45df-8ea7-36145ce7d9ac", 00:11:56.262 "is_configured": true, 00:11:56.262 "data_offset": 2048, 00:11:56.262 "data_size": 63488 00:11:56.262 } 00:11:56.262 ] 00:11:56.262 }' 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.262 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.522 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:56.522 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:56.782 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.782 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:56.782 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.782 12:03:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.782 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.782 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:56.782 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:56.782 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:56.782 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.782 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.782 [2024-11-19 12:03:59.952529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:56.782 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.782 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:56.782 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:56.782 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.782 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:56.782 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.782 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.782 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.782 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:56.782 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:56.782 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:56.782 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.782 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.782 [2024-11-19 12:04:00.107090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.042 [2024-11-19 12:04:00.259256] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:57.042 [2024-11-19 12:04:00.259449] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.042 [2024-11-19 12:04:00.357639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.042 [2024-11-19 12:04:00.357705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.042 [2024-11-19 12:04:00.357718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.042 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.303 BaseBdev2 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:57.303 [ 00:11:57.303 { 00:11:57.303 "name": "BaseBdev2", 00:11:57.303 "aliases": [ 00:11:57.303 "bf066766-1053-4c0d-b3ec-9d10a758d753" 00:11:57.303 ], 00:11:57.303 "product_name": "Malloc disk", 00:11:57.303 "block_size": 512, 00:11:57.303 "num_blocks": 65536, 00:11:57.303 "uuid": "bf066766-1053-4c0d-b3ec-9d10a758d753", 00:11:57.303 "assigned_rate_limits": { 00:11:57.303 "rw_ios_per_sec": 0, 00:11:57.303 "rw_mbytes_per_sec": 0, 00:11:57.303 "r_mbytes_per_sec": 0, 00:11:57.303 "w_mbytes_per_sec": 0 00:11:57.303 }, 00:11:57.303 "claimed": false, 00:11:57.303 "zoned": false, 00:11:57.303 "supported_io_types": { 00:11:57.303 "read": true, 00:11:57.303 "write": true, 00:11:57.303 "unmap": true, 00:11:57.303 "flush": true, 00:11:57.303 "reset": true, 00:11:57.303 "nvme_admin": false, 00:11:57.303 "nvme_io": false, 00:11:57.303 "nvme_io_md": false, 00:11:57.303 "write_zeroes": true, 00:11:57.303 "zcopy": true, 00:11:57.303 "get_zone_info": false, 00:11:57.303 "zone_management": false, 00:11:57.303 "zone_append": false, 00:11:57.303 "compare": false, 00:11:57.303 "compare_and_write": false, 00:11:57.303 "abort": true, 00:11:57.303 "seek_hole": false, 00:11:57.303 "seek_data": false, 00:11:57.303 "copy": true, 00:11:57.303 "nvme_iov_md": false 00:11:57.303 }, 00:11:57.303 "memory_domains": [ 00:11:57.303 { 00:11:57.303 "dma_device_id": "system", 00:11:57.303 "dma_device_type": 1 00:11:57.303 }, 00:11:57.303 { 00:11:57.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.303 "dma_device_type": 2 00:11:57.303 } 00:11:57.303 ], 00:11:57.303 "driver_specific": {} 00:11:57.303 } 00:11:57.303 ] 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:57.303 12:04:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.303 BaseBdev3 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:57.303 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.303 12:04:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.303 [ 00:11:57.304 { 00:11:57.304 "name": "BaseBdev3", 00:11:57.304 "aliases": [ 00:11:57.304 "91e7449d-4d98-4eb7-8ff2-909aca1e7940" 00:11:57.304 ], 00:11:57.304 "product_name": "Malloc disk", 00:11:57.304 "block_size": 512, 00:11:57.304 "num_blocks": 65536, 00:11:57.304 "uuid": "91e7449d-4d98-4eb7-8ff2-909aca1e7940", 00:11:57.304 "assigned_rate_limits": { 00:11:57.304 "rw_ios_per_sec": 0, 00:11:57.304 "rw_mbytes_per_sec": 0, 00:11:57.304 "r_mbytes_per_sec": 0, 00:11:57.304 "w_mbytes_per_sec": 0 00:11:57.304 }, 00:11:57.304 "claimed": false, 00:11:57.304 "zoned": false, 00:11:57.304 "supported_io_types": { 00:11:57.304 "read": true, 00:11:57.304 "write": true, 00:11:57.304 "unmap": true, 00:11:57.304 "flush": true, 00:11:57.304 "reset": true, 00:11:57.304 "nvme_admin": false, 00:11:57.304 "nvme_io": false, 00:11:57.304 "nvme_io_md": false, 00:11:57.304 "write_zeroes": true, 00:11:57.304 "zcopy": true, 00:11:57.304 "get_zone_info": false, 00:11:57.304 "zone_management": false, 00:11:57.304 "zone_append": false, 00:11:57.304 "compare": false, 00:11:57.304 "compare_and_write": false, 00:11:57.304 "abort": true, 00:11:57.304 "seek_hole": false, 00:11:57.304 "seek_data": false, 00:11:57.304 "copy": true, 00:11:57.304 "nvme_iov_md": false 00:11:57.304 }, 00:11:57.304 "memory_domains": [ 00:11:57.304 { 00:11:57.304 "dma_device_id": "system", 00:11:57.304 "dma_device_type": 1 00:11:57.304 }, 00:11:57.304 { 00:11:57.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.304 "dma_device_type": 2 00:11:57.304 } 00:11:57.304 ], 00:11:57.304 "driver_specific": {} 00:11:57.304 } 00:11:57.304 ] 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.304 BaseBdev4 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.304 [ 00:11:57.304 { 00:11:57.304 "name": "BaseBdev4", 00:11:57.304 "aliases": [ 00:11:57.304 "1181704b-0632-46be-addd-9f9d6a318690" 00:11:57.304 ], 00:11:57.304 "product_name": "Malloc disk", 00:11:57.304 "block_size": 512, 00:11:57.304 "num_blocks": 65536, 00:11:57.304 "uuid": "1181704b-0632-46be-addd-9f9d6a318690", 00:11:57.304 "assigned_rate_limits": { 00:11:57.304 "rw_ios_per_sec": 0, 00:11:57.304 "rw_mbytes_per_sec": 0, 00:11:57.304 "r_mbytes_per_sec": 0, 00:11:57.304 "w_mbytes_per_sec": 0 00:11:57.304 }, 00:11:57.304 "claimed": false, 00:11:57.304 "zoned": false, 00:11:57.304 "supported_io_types": { 00:11:57.304 "read": true, 00:11:57.304 "write": true, 00:11:57.304 "unmap": true, 00:11:57.304 "flush": true, 00:11:57.304 "reset": true, 00:11:57.304 "nvme_admin": false, 00:11:57.304 "nvme_io": false, 00:11:57.304 "nvme_io_md": false, 00:11:57.304 "write_zeroes": true, 00:11:57.304 "zcopy": true, 00:11:57.304 "get_zone_info": false, 00:11:57.304 "zone_management": false, 00:11:57.304 "zone_append": false, 00:11:57.304 "compare": false, 00:11:57.304 "compare_and_write": false, 00:11:57.304 "abort": true, 00:11:57.304 "seek_hole": false, 00:11:57.304 "seek_data": false, 00:11:57.304 "copy": true, 00:11:57.304 "nvme_iov_md": false 00:11:57.304 }, 00:11:57.304 "memory_domains": [ 00:11:57.304 { 00:11:57.304 "dma_device_id": "system", 00:11:57.304 "dma_device_type": 1 00:11:57.304 }, 00:11:57.304 { 00:11:57.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.304 "dma_device_type": 2 00:11:57.304 } 00:11:57.304 ], 00:11:57.304 "driver_specific": {} 00:11:57.304 } 00:11:57.304 ] 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.304 [2024-11-19 12:04:00.664204] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:57.304 [2024-11-19 12:04:00.664299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:57.304 [2024-11-19 12:04:00.664350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.304 [2024-11-19 12:04:00.666332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.304 [2024-11-19 12:04:00.666431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.304 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.564 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.564 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.564 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.564 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.565 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.565 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.565 "name": "Existed_Raid", 00:11:57.565 "uuid": "ce11a0d8-253d-4849-8e57-65600a8d1f87", 00:11:57.565 "strip_size_kb": 0, 00:11:57.565 "state": "configuring", 00:11:57.565 "raid_level": "raid1", 00:11:57.565 "superblock": true, 00:11:57.565 "num_base_bdevs": 4, 00:11:57.565 "num_base_bdevs_discovered": 3, 00:11:57.565 "num_base_bdevs_operational": 4, 00:11:57.565 "base_bdevs_list": [ 00:11:57.565 { 00:11:57.565 "name": "BaseBdev1", 00:11:57.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.565 "is_configured": false, 00:11:57.565 "data_offset": 0, 00:11:57.565 "data_size": 0 00:11:57.565 }, 00:11:57.565 { 00:11:57.565 "name": "BaseBdev2", 00:11:57.565 "uuid": "bf066766-1053-4c0d-b3ec-9d10a758d753", 
00:11:57.565 "is_configured": true, 00:11:57.565 "data_offset": 2048, 00:11:57.565 "data_size": 63488 00:11:57.565 }, 00:11:57.565 { 00:11:57.565 "name": "BaseBdev3", 00:11:57.565 "uuid": "91e7449d-4d98-4eb7-8ff2-909aca1e7940", 00:11:57.565 "is_configured": true, 00:11:57.565 "data_offset": 2048, 00:11:57.565 "data_size": 63488 00:11:57.565 }, 00:11:57.565 { 00:11:57.565 "name": "BaseBdev4", 00:11:57.565 "uuid": "1181704b-0632-46be-addd-9f9d6a318690", 00:11:57.565 "is_configured": true, 00:11:57.565 "data_offset": 2048, 00:11:57.565 "data_size": 63488 00:11:57.565 } 00:11:57.565 ] 00:11:57.565 }' 00:11:57.565 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.565 12:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.825 [2024-11-19 12:04:01.151424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.825 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.085 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.085 "name": "Existed_Raid", 00:11:58.085 "uuid": "ce11a0d8-253d-4849-8e57-65600a8d1f87", 00:11:58.085 "strip_size_kb": 0, 00:11:58.085 "state": "configuring", 00:11:58.085 "raid_level": "raid1", 00:11:58.085 "superblock": true, 00:11:58.085 "num_base_bdevs": 4, 00:11:58.085 "num_base_bdevs_discovered": 2, 00:11:58.085 "num_base_bdevs_operational": 4, 00:11:58.085 "base_bdevs_list": [ 00:11:58.085 { 00:11:58.085 "name": "BaseBdev1", 00:11:58.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.085 "is_configured": false, 00:11:58.085 "data_offset": 0, 00:11:58.085 "data_size": 0 00:11:58.085 }, 00:11:58.085 { 00:11:58.085 "name": null, 00:11:58.085 "uuid": "bf066766-1053-4c0d-b3ec-9d10a758d753", 00:11:58.085 
"is_configured": false, 00:11:58.085 "data_offset": 0, 00:11:58.085 "data_size": 63488 00:11:58.085 }, 00:11:58.085 { 00:11:58.085 "name": "BaseBdev3", 00:11:58.085 "uuid": "91e7449d-4d98-4eb7-8ff2-909aca1e7940", 00:11:58.085 "is_configured": true, 00:11:58.085 "data_offset": 2048, 00:11:58.085 "data_size": 63488 00:11:58.085 }, 00:11:58.085 { 00:11:58.085 "name": "BaseBdev4", 00:11:58.085 "uuid": "1181704b-0632-46be-addd-9f9d6a318690", 00:11:58.085 "is_configured": true, 00:11:58.085 "data_offset": 2048, 00:11:58.085 "data_size": 63488 00:11:58.085 } 00:11:58.085 ] 00:11:58.085 }' 00:11:58.085 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.085 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.345 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.345 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.345 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.345 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:58.345 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.345 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.346 [2024-11-19 12:04:01.690430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.346 BaseBdev1 
00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.346 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.346 [ 00:11:58.346 { 00:11:58.346 "name": "BaseBdev1", 00:11:58.346 "aliases": [ 00:11:58.346 "7749a435-e79f-4238-b3f1-70b8a1216fad" 00:11:58.346 ], 00:11:58.346 "product_name": "Malloc disk", 00:11:58.346 "block_size": 512, 00:11:58.346 "num_blocks": 65536, 00:11:58.346 "uuid": "7749a435-e79f-4238-b3f1-70b8a1216fad", 00:11:58.346 "assigned_rate_limits": { 00:11:58.346 
"rw_ios_per_sec": 0, 00:11:58.346 "rw_mbytes_per_sec": 0, 00:11:58.346 "r_mbytes_per_sec": 0, 00:11:58.346 "w_mbytes_per_sec": 0 00:11:58.346 }, 00:11:58.346 "claimed": true, 00:11:58.346 "claim_type": "exclusive_write", 00:11:58.346 "zoned": false, 00:11:58.346 "supported_io_types": { 00:11:58.606 "read": true, 00:11:58.606 "write": true, 00:11:58.606 "unmap": true, 00:11:58.606 "flush": true, 00:11:58.606 "reset": true, 00:11:58.606 "nvme_admin": false, 00:11:58.606 "nvme_io": false, 00:11:58.606 "nvme_io_md": false, 00:11:58.606 "write_zeroes": true, 00:11:58.606 "zcopy": true, 00:11:58.606 "get_zone_info": false, 00:11:58.606 "zone_management": false, 00:11:58.606 "zone_append": false, 00:11:58.606 "compare": false, 00:11:58.606 "compare_and_write": false, 00:11:58.606 "abort": true, 00:11:58.606 "seek_hole": false, 00:11:58.606 "seek_data": false, 00:11:58.606 "copy": true, 00:11:58.606 "nvme_iov_md": false 00:11:58.606 }, 00:11:58.606 "memory_domains": [ 00:11:58.606 { 00:11:58.606 "dma_device_id": "system", 00:11:58.606 "dma_device_type": 1 00:11:58.606 }, 00:11:58.606 { 00:11:58.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.606 "dma_device_type": 2 00:11:58.606 } 00:11:58.606 ], 00:11:58.606 "driver_specific": {} 00:11:58.606 } 00:11:58.606 ] 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.606 "name": "Existed_Raid", 00:11:58.606 "uuid": "ce11a0d8-253d-4849-8e57-65600a8d1f87", 00:11:58.606 "strip_size_kb": 0, 00:11:58.606 "state": "configuring", 00:11:58.606 "raid_level": "raid1", 00:11:58.606 "superblock": true, 00:11:58.606 "num_base_bdevs": 4, 00:11:58.606 "num_base_bdevs_discovered": 3, 00:11:58.606 "num_base_bdevs_operational": 4, 00:11:58.606 "base_bdevs_list": [ 00:11:58.606 { 00:11:58.606 "name": "BaseBdev1", 00:11:58.606 "uuid": "7749a435-e79f-4238-b3f1-70b8a1216fad", 00:11:58.606 "is_configured": true, 00:11:58.606 "data_offset": 2048, 00:11:58.606 "data_size": 63488 
00:11:58.606 }, 00:11:58.606 { 00:11:58.606 "name": null, 00:11:58.606 "uuid": "bf066766-1053-4c0d-b3ec-9d10a758d753", 00:11:58.606 "is_configured": false, 00:11:58.606 "data_offset": 0, 00:11:58.606 "data_size": 63488 00:11:58.606 }, 00:11:58.606 { 00:11:58.606 "name": "BaseBdev3", 00:11:58.606 "uuid": "91e7449d-4d98-4eb7-8ff2-909aca1e7940", 00:11:58.606 "is_configured": true, 00:11:58.606 "data_offset": 2048, 00:11:58.606 "data_size": 63488 00:11:58.606 }, 00:11:58.606 { 00:11:58.606 "name": "BaseBdev4", 00:11:58.606 "uuid": "1181704b-0632-46be-addd-9f9d6a318690", 00:11:58.606 "is_configured": true, 00:11:58.606 "data_offset": 2048, 00:11:58.606 "data_size": 63488 00:11:58.606 } 00:11:58.606 ] 00:11:58.606 }' 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.606 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.866 
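The verification steps traced above all follow the same pattern: call `rpc_cmd bdev_raid_get_bdevs all`, then pipe the JSON through `jq` to pull out a single field (such as `is_configured` or the raid `state`) and compare it literally. A minimal standalone sketch of that check is below; the sample JSON and variable names are illustrative only, not taken from this run:

```shell
#!/usr/bin/env bash
# Illustrative subset of a bdev_raid_get_bdevs response (hypothetical data,
# shaped like the dumps in the trace above).
raid_json='[{"name":"Existed_Raid","state":"configuring","base_bdevs_list":[{"name":"BaseBdev1","is_configured":true},{"name":null,"is_configured":false}]}]'

# Same style of check the test performs: extract one flag with jq and
# compare it as a literal string, as in `[[ false == \f\a\l\s\e ]]`.
configured=$(echo "$raid_json" | jq '.[0].base_bdevs_list[1].is_configured')
state=$(echo "$raid_json" | jq -r '.[0].state')

if [[ "$configured" == "false" && "$state" == "configuring" ]]; then
    echo "state check passed"
fi
```

In the real suite this logic lives in `verify_raid_bdev_state` in `bdev_raid.sh`, which additionally selects the raid bdev by name and checks strip size and base-bdev counts; the sketch only shows the jq-and-compare core.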
[2024-11-19 12:04:02.169744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.866 12:04:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.866 "name": "Existed_Raid", 00:11:58.866 "uuid": "ce11a0d8-253d-4849-8e57-65600a8d1f87", 00:11:58.866 "strip_size_kb": 0, 00:11:58.866 "state": "configuring", 00:11:58.866 "raid_level": "raid1", 00:11:58.866 "superblock": true, 00:11:58.866 "num_base_bdevs": 4, 00:11:58.866 "num_base_bdevs_discovered": 2, 00:11:58.866 "num_base_bdevs_operational": 4, 00:11:58.866 "base_bdevs_list": [ 00:11:58.866 { 00:11:58.866 "name": "BaseBdev1", 00:11:58.866 "uuid": "7749a435-e79f-4238-b3f1-70b8a1216fad", 00:11:58.866 "is_configured": true, 00:11:58.866 "data_offset": 2048, 00:11:58.866 "data_size": 63488 00:11:58.866 }, 00:11:58.866 { 00:11:58.866 "name": null, 00:11:58.866 "uuid": "bf066766-1053-4c0d-b3ec-9d10a758d753", 00:11:58.866 "is_configured": false, 00:11:58.866 "data_offset": 0, 00:11:58.866 "data_size": 63488 00:11:58.866 }, 00:11:58.866 { 00:11:58.866 "name": null, 00:11:58.866 "uuid": "91e7449d-4d98-4eb7-8ff2-909aca1e7940", 00:11:58.866 "is_configured": false, 00:11:58.866 "data_offset": 0, 00:11:58.866 "data_size": 63488 00:11:58.866 }, 00:11:58.866 { 00:11:58.866 "name": "BaseBdev4", 00:11:58.866 "uuid": "1181704b-0632-46be-addd-9f9d6a318690", 00:11:58.866 "is_configured": true, 00:11:58.866 "data_offset": 2048, 00:11:58.866 "data_size": 63488 00:11:58.866 } 00:11:58.866 ] 00:11:58.866 }' 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.866 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.437 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:59.438 
12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.438 [2024-11-19 12:04:02.708862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.438 "name": "Existed_Raid", 00:11:59.438 "uuid": "ce11a0d8-253d-4849-8e57-65600a8d1f87", 00:11:59.438 "strip_size_kb": 0, 00:11:59.438 "state": "configuring", 00:11:59.438 "raid_level": "raid1", 00:11:59.438 "superblock": true, 00:11:59.438 "num_base_bdevs": 4, 00:11:59.438 "num_base_bdevs_discovered": 3, 00:11:59.438 "num_base_bdevs_operational": 4, 00:11:59.438 "base_bdevs_list": [ 00:11:59.438 { 00:11:59.438 "name": "BaseBdev1", 00:11:59.438 "uuid": "7749a435-e79f-4238-b3f1-70b8a1216fad", 00:11:59.438 "is_configured": true, 00:11:59.438 "data_offset": 2048, 00:11:59.438 "data_size": 63488 00:11:59.438 }, 00:11:59.438 { 00:11:59.438 "name": null, 00:11:59.438 "uuid": "bf066766-1053-4c0d-b3ec-9d10a758d753", 00:11:59.438 "is_configured": false, 00:11:59.438 "data_offset": 0, 00:11:59.438 "data_size": 63488 00:11:59.438 }, 00:11:59.438 { 00:11:59.438 "name": "BaseBdev3", 00:11:59.438 "uuid": "91e7449d-4d98-4eb7-8ff2-909aca1e7940", 00:11:59.438 "is_configured": true, 00:11:59.438 "data_offset": 2048, 00:11:59.438 "data_size": 63488 00:11:59.438 }, 00:11:59.438 { 00:11:59.438 "name": "BaseBdev4", 00:11:59.438 "uuid": 
"1181704b-0632-46be-addd-9f9d6a318690", 00:11:59.438 "is_configured": true, 00:11:59.438 "data_offset": 2048, 00:11:59.438 "data_size": 63488 00:11:59.438 } 00:11:59.438 ] 00:11:59.438 }' 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.438 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.007 [2024-11-19 12:04:03.156108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.007 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.007 "name": "Existed_Raid", 00:12:00.007 "uuid": "ce11a0d8-253d-4849-8e57-65600a8d1f87", 00:12:00.007 "strip_size_kb": 0, 00:12:00.007 "state": "configuring", 00:12:00.007 "raid_level": "raid1", 00:12:00.007 "superblock": true, 00:12:00.007 "num_base_bdevs": 4, 00:12:00.007 "num_base_bdevs_discovered": 2, 00:12:00.007 "num_base_bdevs_operational": 4, 00:12:00.007 "base_bdevs_list": [ 00:12:00.007 { 00:12:00.007 "name": null, 00:12:00.007 
"uuid": "7749a435-e79f-4238-b3f1-70b8a1216fad", 00:12:00.007 "is_configured": false, 00:12:00.007 "data_offset": 0, 00:12:00.007 "data_size": 63488 00:12:00.007 }, 00:12:00.007 { 00:12:00.007 "name": null, 00:12:00.007 "uuid": "bf066766-1053-4c0d-b3ec-9d10a758d753", 00:12:00.007 "is_configured": false, 00:12:00.007 "data_offset": 0, 00:12:00.007 "data_size": 63488 00:12:00.007 }, 00:12:00.007 { 00:12:00.007 "name": "BaseBdev3", 00:12:00.007 "uuid": "91e7449d-4d98-4eb7-8ff2-909aca1e7940", 00:12:00.007 "is_configured": true, 00:12:00.007 "data_offset": 2048, 00:12:00.007 "data_size": 63488 00:12:00.007 }, 00:12:00.007 { 00:12:00.007 "name": "BaseBdev4", 00:12:00.007 "uuid": "1181704b-0632-46be-addd-9f9d6a318690", 00:12:00.007 "is_configured": true, 00:12:00.007 "data_offset": 2048, 00:12:00.008 "data_size": 63488 00:12:00.008 } 00:12:00.008 ] 00:12:00.008 }' 00:12:00.008 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.008 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.576 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.576 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:00.576 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.576 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.576 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.576 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.577 [2024-11-19 12:04:03.788963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.577 12:04:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.577 "name": "Existed_Raid", 00:12:00.577 "uuid": "ce11a0d8-253d-4849-8e57-65600a8d1f87", 00:12:00.577 "strip_size_kb": 0, 00:12:00.577 "state": "configuring", 00:12:00.577 "raid_level": "raid1", 00:12:00.577 "superblock": true, 00:12:00.577 "num_base_bdevs": 4, 00:12:00.577 "num_base_bdevs_discovered": 3, 00:12:00.577 "num_base_bdevs_operational": 4, 00:12:00.577 "base_bdevs_list": [ 00:12:00.577 { 00:12:00.577 "name": null, 00:12:00.577 "uuid": "7749a435-e79f-4238-b3f1-70b8a1216fad", 00:12:00.577 "is_configured": false, 00:12:00.577 "data_offset": 0, 00:12:00.577 "data_size": 63488 00:12:00.577 }, 00:12:00.577 { 00:12:00.577 "name": "BaseBdev2", 00:12:00.577 "uuid": "bf066766-1053-4c0d-b3ec-9d10a758d753", 00:12:00.577 "is_configured": true, 00:12:00.577 "data_offset": 2048, 00:12:00.577 "data_size": 63488 00:12:00.577 }, 00:12:00.577 { 00:12:00.577 "name": "BaseBdev3", 00:12:00.577 "uuid": "91e7449d-4d98-4eb7-8ff2-909aca1e7940", 00:12:00.577 "is_configured": true, 00:12:00.577 "data_offset": 2048, 00:12:00.577 "data_size": 63488 00:12:00.577 }, 00:12:00.577 { 00:12:00.577 "name": "BaseBdev4", 00:12:00.577 "uuid": "1181704b-0632-46be-addd-9f9d6a318690", 00:12:00.577 "is_configured": true, 00:12:00.577 "data_offset": 2048, 00:12:00.577 "data_size": 63488 00:12:00.577 } 00:12:00.577 ] 00:12:00.577 }' 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.577 12:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.146 12:04:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7749a435-e79f-4238-b3f1-70b8a1216fad 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.146 [2024-11-19 12:04:04.395665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:01.146 [2024-11-19 12:04:04.395909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:01.146 [2024-11-19 12:04:04.395927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:01.146 [2024-11-19 12:04:04.396258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:01.146 [2024-11-19 12:04:04.396421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:01.146 [2024-11-19 12:04:04.396489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:01.146 NewBaseBdev 00:12:01.146 [2024-11-19 12:04:04.396744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.146 12:04:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.146 [ 00:12:01.146 { 00:12:01.146 "name": "NewBaseBdev", 00:12:01.146 "aliases": [ 00:12:01.146 "7749a435-e79f-4238-b3f1-70b8a1216fad" 00:12:01.146 ], 00:12:01.146 "product_name": "Malloc disk", 00:12:01.146 "block_size": 512, 00:12:01.146 "num_blocks": 65536, 00:12:01.146 "uuid": "7749a435-e79f-4238-b3f1-70b8a1216fad", 00:12:01.146 "assigned_rate_limits": { 00:12:01.146 "rw_ios_per_sec": 0, 00:12:01.146 "rw_mbytes_per_sec": 0, 00:12:01.146 "r_mbytes_per_sec": 0, 00:12:01.146 "w_mbytes_per_sec": 0 00:12:01.146 }, 00:12:01.146 "claimed": true, 00:12:01.146 "claim_type": "exclusive_write", 00:12:01.146 "zoned": false, 00:12:01.146 "supported_io_types": { 00:12:01.146 "read": true, 00:12:01.146 "write": true, 00:12:01.146 "unmap": true, 00:12:01.146 "flush": true, 00:12:01.146 "reset": true, 00:12:01.146 "nvme_admin": false, 00:12:01.146 "nvme_io": false, 00:12:01.146 "nvme_io_md": false, 00:12:01.146 "write_zeroes": true, 00:12:01.146 "zcopy": true, 00:12:01.146 "get_zone_info": false, 00:12:01.146 "zone_management": false, 00:12:01.146 "zone_append": false, 00:12:01.146 "compare": false, 00:12:01.146 "compare_and_write": false, 00:12:01.146 "abort": true, 00:12:01.146 "seek_hole": false, 00:12:01.146 "seek_data": false, 00:12:01.146 "copy": true, 00:12:01.146 "nvme_iov_md": false 00:12:01.146 }, 00:12:01.146 "memory_domains": [ 00:12:01.146 { 00:12:01.146 "dma_device_id": "system", 00:12:01.146 "dma_device_type": 1 00:12:01.146 }, 00:12:01.146 { 00:12:01.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.146 "dma_device_type": 2 00:12:01.146 } 00:12:01.146 ], 00:12:01.146 "driver_specific": {} 00:12:01.146 } 00:12:01.146 ] 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:01.146 12:04:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.146 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.146 "name": "Existed_Raid", 00:12:01.146 "uuid": "ce11a0d8-253d-4849-8e57-65600a8d1f87", 00:12:01.146 "strip_size_kb": 0, 00:12:01.146 
"state": "online", 00:12:01.146 "raid_level": "raid1", 00:12:01.146 "superblock": true, 00:12:01.146 "num_base_bdevs": 4, 00:12:01.146 "num_base_bdevs_discovered": 4, 00:12:01.146 "num_base_bdevs_operational": 4, 00:12:01.146 "base_bdevs_list": [ 00:12:01.146 { 00:12:01.146 "name": "NewBaseBdev", 00:12:01.146 "uuid": "7749a435-e79f-4238-b3f1-70b8a1216fad", 00:12:01.146 "is_configured": true, 00:12:01.146 "data_offset": 2048, 00:12:01.146 "data_size": 63488 00:12:01.146 }, 00:12:01.146 { 00:12:01.146 "name": "BaseBdev2", 00:12:01.146 "uuid": "bf066766-1053-4c0d-b3ec-9d10a758d753", 00:12:01.146 "is_configured": true, 00:12:01.146 "data_offset": 2048, 00:12:01.146 "data_size": 63488 00:12:01.146 }, 00:12:01.146 { 00:12:01.146 "name": "BaseBdev3", 00:12:01.146 "uuid": "91e7449d-4d98-4eb7-8ff2-909aca1e7940", 00:12:01.146 "is_configured": true, 00:12:01.146 "data_offset": 2048, 00:12:01.146 "data_size": 63488 00:12:01.146 }, 00:12:01.146 { 00:12:01.147 "name": "BaseBdev4", 00:12:01.147 "uuid": "1181704b-0632-46be-addd-9f9d6a318690", 00:12:01.147 "is_configured": true, 00:12:01.147 "data_offset": 2048, 00:12:01.147 "data_size": 63488 00:12:01.147 } 00:12:01.147 ] 00:12:01.147 }' 00:12:01.147 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.147 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:01.717 
12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.717 [2024-11-19 12:04:04.871449] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:01.717 "name": "Existed_Raid", 00:12:01.717 "aliases": [ 00:12:01.717 "ce11a0d8-253d-4849-8e57-65600a8d1f87" 00:12:01.717 ], 00:12:01.717 "product_name": "Raid Volume", 00:12:01.717 "block_size": 512, 00:12:01.717 "num_blocks": 63488, 00:12:01.717 "uuid": "ce11a0d8-253d-4849-8e57-65600a8d1f87", 00:12:01.717 "assigned_rate_limits": { 00:12:01.717 "rw_ios_per_sec": 0, 00:12:01.717 "rw_mbytes_per_sec": 0, 00:12:01.717 "r_mbytes_per_sec": 0, 00:12:01.717 "w_mbytes_per_sec": 0 00:12:01.717 }, 00:12:01.717 "claimed": false, 00:12:01.717 "zoned": false, 00:12:01.717 "supported_io_types": { 00:12:01.717 "read": true, 00:12:01.717 "write": true, 00:12:01.717 "unmap": false, 00:12:01.717 "flush": false, 00:12:01.717 "reset": true, 00:12:01.717 "nvme_admin": false, 00:12:01.717 "nvme_io": false, 00:12:01.717 "nvme_io_md": false, 00:12:01.717 "write_zeroes": true, 00:12:01.717 "zcopy": false, 00:12:01.717 "get_zone_info": false, 00:12:01.717 "zone_management": false, 00:12:01.717 "zone_append": false, 00:12:01.717 "compare": false, 00:12:01.717 "compare_and_write": false, 00:12:01.717 
"abort": false, 00:12:01.717 "seek_hole": false, 00:12:01.717 "seek_data": false, 00:12:01.717 "copy": false, 00:12:01.717 "nvme_iov_md": false 00:12:01.717 }, 00:12:01.717 "memory_domains": [ 00:12:01.717 { 00:12:01.717 "dma_device_id": "system", 00:12:01.717 "dma_device_type": 1 00:12:01.717 }, 00:12:01.717 { 00:12:01.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.717 "dma_device_type": 2 00:12:01.717 }, 00:12:01.717 { 00:12:01.717 "dma_device_id": "system", 00:12:01.717 "dma_device_type": 1 00:12:01.717 }, 00:12:01.717 { 00:12:01.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.717 "dma_device_type": 2 00:12:01.717 }, 00:12:01.717 { 00:12:01.717 "dma_device_id": "system", 00:12:01.717 "dma_device_type": 1 00:12:01.717 }, 00:12:01.717 { 00:12:01.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.717 "dma_device_type": 2 00:12:01.717 }, 00:12:01.717 { 00:12:01.717 "dma_device_id": "system", 00:12:01.717 "dma_device_type": 1 00:12:01.717 }, 00:12:01.717 { 00:12:01.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.717 "dma_device_type": 2 00:12:01.717 } 00:12:01.717 ], 00:12:01.717 "driver_specific": { 00:12:01.717 "raid": { 00:12:01.717 "uuid": "ce11a0d8-253d-4849-8e57-65600a8d1f87", 00:12:01.717 "strip_size_kb": 0, 00:12:01.717 "state": "online", 00:12:01.717 "raid_level": "raid1", 00:12:01.717 "superblock": true, 00:12:01.717 "num_base_bdevs": 4, 00:12:01.717 "num_base_bdevs_discovered": 4, 00:12:01.717 "num_base_bdevs_operational": 4, 00:12:01.717 "base_bdevs_list": [ 00:12:01.717 { 00:12:01.717 "name": "NewBaseBdev", 00:12:01.717 "uuid": "7749a435-e79f-4238-b3f1-70b8a1216fad", 00:12:01.717 "is_configured": true, 00:12:01.717 "data_offset": 2048, 00:12:01.717 "data_size": 63488 00:12:01.717 }, 00:12:01.717 { 00:12:01.717 "name": "BaseBdev2", 00:12:01.717 "uuid": "bf066766-1053-4c0d-b3ec-9d10a758d753", 00:12:01.717 "is_configured": true, 00:12:01.717 "data_offset": 2048, 00:12:01.717 "data_size": 63488 00:12:01.717 }, 00:12:01.717 { 
00:12:01.717 "name": "BaseBdev3", 00:12:01.717 "uuid": "91e7449d-4d98-4eb7-8ff2-909aca1e7940", 00:12:01.717 "is_configured": true, 00:12:01.717 "data_offset": 2048, 00:12:01.717 "data_size": 63488 00:12:01.717 }, 00:12:01.717 { 00:12:01.717 "name": "BaseBdev4", 00:12:01.717 "uuid": "1181704b-0632-46be-addd-9f9d6a318690", 00:12:01.717 "is_configured": true, 00:12:01.717 "data_offset": 2048, 00:12:01.717 "data_size": 63488 00:12:01.717 } 00:12:01.717 ] 00:12:01.717 } 00:12:01.717 } 00:12:01.717 }' 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:01.717 BaseBdev2 00:12:01.717 BaseBdev3 00:12:01.717 BaseBdev4' 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.717 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.717 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.717 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:01.717 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.717 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.717 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:01.717 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.717 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.717 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.717 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.977 [2024-11-19 12:04:05.186487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.977 [2024-11-19 12:04:05.186516] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:01.977 [2024-11-19 12:04:05.186610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:01.977 [2024-11-19 12:04:05.186910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:01.977 [2024-11-19 12:04:05.186924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73861 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73861 ']' 00:12:01.977 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73861 00:12:01.978 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:01.978 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.978 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73861 00:12:01.978 killing process with pid 73861 00:12:01.978 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:01.978 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:01.978 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73861' 00:12:01.978 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73861 00:12:01.978 [2024-11-19 12:04:05.235631] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:01.978 12:04:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73861 00:12:02.546 [2024-11-19 12:04:05.641322] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:03.486 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:03.486 00:12:03.486 real 0m11.632s 00:12:03.486 user 0m18.432s 00:12:03.486 sys 0m2.122s 00:12:03.486 12:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:03.486 12:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.486 ************************************ 00:12:03.486 END TEST raid_state_function_test_sb 00:12:03.486 ************************************ 00:12:03.486 12:04:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:03.486 12:04:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.486 12:04:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.486 12:04:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:03.486 ************************************ 00:12:03.486 START TEST raid_superblock_test 00:12:03.486 ************************************ 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:03.486 12:04:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74532 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74532 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74532 ']' 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.486 12:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.745 [2024-11-19 12:04:06.932433] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:12:03.745 [2024-11-19 12:04:06.932635] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74532 ] 00:12:03.745 [2024-11-19 12:04:07.107260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.004 [2024-11-19 12:04:07.230727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.265 [2024-11-19 12:04:07.428363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.265 [2024-11-19 12:04:07.428509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:04.526 
12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.526 malloc1 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.526 [2024-11-19 12:04:07.810325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:04.526 [2024-11-19 12:04:07.810427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.526 [2024-11-19 12:04:07.810470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:04.526 [2024-11-19 12:04:07.810504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.526 [2024-11-19 12:04:07.812624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.526 [2024-11-19 12:04:07.812695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:04.526 pt1 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.526 malloc2 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.526 [2024-11-19 12:04:07.869233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:04.526 [2024-11-19 12:04:07.869291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.526 [2024-11-19 12:04:07.869314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:04.526 [2024-11-19 12:04:07.869325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.526 [2024-11-19 12:04:07.871585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.526 [2024-11-19 12:04:07.871670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:04.526 
pt2 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.526 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.787 malloc3 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.787 [2024-11-19 12:04:07.936756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:04.787 [2024-11-19 12:04:07.936874] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.787 [2024-11-19 12:04:07.936952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:04.787 [2024-11-19 12:04:07.936991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.787 [2024-11-19 12:04:07.939415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.787 [2024-11-19 12:04:07.939487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:04.787 pt3 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.787 malloc4 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.787 12:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.787 [2024-11-19 12:04:07.996972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:04.787 [2024-11-19 12:04:07.997073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.787 [2024-11-19 12:04:07.997109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:04.787 [2024-11-19 12:04:07.997159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.787 [2024-11-19 12:04:07.999265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.787 [2024-11-19 12:04:07.999336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:04.787 pt4 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.787 [2024-11-19 12:04:08.009023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:04.787 [2024-11-19 12:04:08.010825] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:04.787 [2024-11-19 12:04:08.010924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:04.787 [2024-11-19 12:04:08.011001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:04.787 [2024-11-19 12:04:08.011260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:04.787 [2024-11-19 12:04:08.011314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:04.787 [2024-11-19 12:04:08.011604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:04.787 [2024-11-19 12:04:08.011796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:04.787 [2024-11-19 12:04:08.011813] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:04.787 [2024-11-19 12:04:08.011968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.787 
12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.787 "name": "raid_bdev1", 00:12:04.787 "uuid": "3c974975-0510-4a75-bf01-75ea5fc28ccf", 00:12:04.787 "strip_size_kb": 0, 00:12:04.787 "state": "online", 00:12:04.787 "raid_level": "raid1", 00:12:04.787 "superblock": true, 00:12:04.787 "num_base_bdevs": 4, 00:12:04.787 "num_base_bdevs_discovered": 4, 00:12:04.787 "num_base_bdevs_operational": 4, 00:12:04.787 "base_bdevs_list": [ 00:12:04.787 { 00:12:04.787 "name": "pt1", 00:12:04.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:04.787 "is_configured": true, 00:12:04.787 "data_offset": 2048, 00:12:04.787 "data_size": 63488 00:12:04.787 }, 00:12:04.787 { 00:12:04.787 "name": "pt2", 00:12:04.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.787 "is_configured": true, 00:12:04.787 "data_offset": 2048, 00:12:04.787 "data_size": 63488 00:12:04.787 }, 00:12:04.787 { 00:12:04.787 "name": "pt3", 00:12:04.787 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.787 "is_configured": true, 00:12:04.787 "data_offset": 2048, 00:12:04.787 "data_size": 63488 
00:12:04.787 }, 00:12:04.787 { 00:12:04.787 "name": "pt4", 00:12:04.787 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:04.787 "is_configured": true, 00:12:04.787 "data_offset": 2048, 00:12:04.787 "data_size": 63488 00:12:04.787 } 00:12:04.787 ] 00:12:04.787 }' 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.787 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.358 [2024-11-19 12:04:08.488571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.358 "name": "raid_bdev1", 00:12:05.358 "aliases": [ 00:12:05.358 "3c974975-0510-4a75-bf01-75ea5fc28ccf" 00:12:05.358 ], 
00:12:05.358 "product_name": "Raid Volume", 00:12:05.358 "block_size": 512, 00:12:05.358 "num_blocks": 63488, 00:12:05.358 "uuid": "3c974975-0510-4a75-bf01-75ea5fc28ccf", 00:12:05.358 "assigned_rate_limits": { 00:12:05.358 "rw_ios_per_sec": 0, 00:12:05.358 "rw_mbytes_per_sec": 0, 00:12:05.358 "r_mbytes_per_sec": 0, 00:12:05.358 "w_mbytes_per_sec": 0 00:12:05.358 }, 00:12:05.358 "claimed": false, 00:12:05.358 "zoned": false, 00:12:05.358 "supported_io_types": { 00:12:05.358 "read": true, 00:12:05.358 "write": true, 00:12:05.358 "unmap": false, 00:12:05.358 "flush": false, 00:12:05.358 "reset": true, 00:12:05.358 "nvme_admin": false, 00:12:05.358 "nvme_io": false, 00:12:05.358 "nvme_io_md": false, 00:12:05.358 "write_zeroes": true, 00:12:05.358 "zcopy": false, 00:12:05.358 "get_zone_info": false, 00:12:05.358 "zone_management": false, 00:12:05.358 "zone_append": false, 00:12:05.358 "compare": false, 00:12:05.358 "compare_and_write": false, 00:12:05.358 "abort": false, 00:12:05.358 "seek_hole": false, 00:12:05.358 "seek_data": false, 00:12:05.358 "copy": false, 00:12:05.358 "nvme_iov_md": false 00:12:05.358 }, 00:12:05.358 "memory_domains": [ 00:12:05.358 { 00:12:05.358 "dma_device_id": "system", 00:12:05.358 "dma_device_type": 1 00:12:05.358 }, 00:12:05.358 { 00:12:05.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.358 "dma_device_type": 2 00:12:05.358 }, 00:12:05.358 { 00:12:05.358 "dma_device_id": "system", 00:12:05.358 "dma_device_type": 1 00:12:05.358 }, 00:12:05.358 { 00:12:05.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.358 "dma_device_type": 2 00:12:05.358 }, 00:12:05.358 { 00:12:05.358 "dma_device_id": "system", 00:12:05.358 "dma_device_type": 1 00:12:05.358 }, 00:12:05.358 { 00:12:05.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.358 "dma_device_type": 2 00:12:05.358 }, 00:12:05.358 { 00:12:05.358 "dma_device_id": "system", 00:12:05.358 "dma_device_type": 1 00:12:05.358 }, 00:12:05.358 { 00:12:05.358 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:05.358 "dma_device_type": 2 00:12:05.358 } 00:12:05.358 ], 00:12:05.358 "driver_specific": { 00:12:05.358 "raid": { 00:12:05.358 "uuid": "3c974975-0510-4a75-bf01-75ea5fc28ccf", 00:12:05.358 "strip_size_kb": 0, 00:12:05.358 "state": "online", 00:12:05.358 "raid_level": "raid1", 00:12:05.358 "superblock": true, 00:12:05.358 "num_base_bdevs": 4, 00:12:05.358 "num_base_bdevs_discovered": 4, 00:12:05.358 "num_base_bdevs_operational": 4, 00:12:05.358 "base_bdevs_list": [ 00:12:05.358 { 00:12:05.358 "name": "pt1", 00:12:05.358 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.358 "is_configured": true, 00:12:05.358 "data_offset": 2048, 00:12:05.358 "data_size": 63488 00:12:05.358 }, 00:12:05.358 { 00:12:05.358 "name": "pt2", 00:12:05.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.358 "is_configured": true, 00:12:05.358 "data_offset": 2048, 00:12:05.358 "data_size": 63488 00:12:05.358 }, 00:12:05.358 { 00:12:05.358 "name": "pt3", 00:12:05.358 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.358 "is_configured": true, 00:12:05.358 "data_offset": 2048, 00:12:05.358 "data_size": 63488 00:12:05.358 }, 00:12:05.358 { 00:12:05.358 "name": "pt4", 00:12:05.358 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.358 "is_configured": true, 00:12:05.358 "data_offset": 2048, 00:12:05.358 "data_size": 63488 00:12:05.358 } 00:12:05.358 ] 00:12:05.358 } 00:12:05.358 } 00:12:05.358 }' 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:05.358 pt2 00:12:05.358 pt3 00:12:05.358 pt4' 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.358 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.359 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:05.359 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.359 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.359 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.359 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.359 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.359 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.359 12:04:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:05.359 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.359 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.359 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.619 [2024-11-19 12:04:08.823994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3c974975-0510-4a75-bf01-75ea5fc28ccf 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3c974975-0510-4a75-bf01-75ea5fc28ccf ']' 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.619 [2024-11-19 12:04:08.867568] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:05.619 [2024-11-19 12:04:08.867645] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.619 [2024-11-19 12:04:08.867791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.619 [2024-11-19 12:04:08.867922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.619 [2024-11-19 12:04:08.867979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.619 12:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.880 12:04:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.880 [2024-11-19 12:04:09.027290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:05.880 [2024-11-19 12:04:09.029348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:05.880 [2024-11-19 12:04:09.029439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:05.880 [2024-11-19 12:04:09.029525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:05.880 [2024-11-19 12:04:09.029610] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:05.880 [2024-11-19 12:04:09.029711] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:05.880 [2024-11-19 12:04:09.029770] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:05.880 [2024-11-19 12:04:09.029827] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:05.880 [2024-11-19 12:04:09.029875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:05.880 [2024-11-19 12:04:09.029912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:05.880 request: 00:12:05.880 { 00:12:05.880 "name": "raid_bdev1", 00:12:05.880 "raid_level": "raid1", 00:12:05.880 "base_bdevs": [ 00:12:05.880 "malloc1", 00:12:05.880 "malloc2", 00:12:05.880 "malloc3", 00:12:05.880 "malloc4" 00:12:05.880 ], 00:12:05.880 "superblock": false, 00:12:05.880 "method": "bdev_raid_create", 00:12:05.880 "req_id": 1 00:12:05.880 } 00:12:05.880 Got JSON-RPC error response 00:12:05.880 response: 00:12:05.880 { 00:12:05.880 "code": -17, 00:12:05.880 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:05.880 } 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:05.880 
12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.880 [2024-11-19 12:04:09.095189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:05.880 [2024-11-19 12:04:09.095303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.880 [2024-11-19 12:04:09.095357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:05.880 [2024-11-19 12:04:09.095421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.880 [2024-11-19 12:04:09.097684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.880 [2024-11-19 12:04:09.097776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:05.880 [2024-11-19 12:04:09.097915] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:05.880 [2024-11-19 12:04:09.098050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:05.880 pt1 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.880 12:04:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.880 "name": "raid_bdev1", 00:12:05.880 "uuid": "3c974975-0510-4a75-bf01-75ea5fc28ccf", 00:12:05.880 "strip_size_kb": 0, 00:12:05.880 "state": "configuring", 00:12:05.880 "raid_level": "raid1", 00:12:05.880 "superblock": true, 00:12:05.880 "num_base_bdevs": 4, 00:12:05.880 "num_base_bdevs_discovered": 1, 00:12:05.880 "num_base_bdevs_operational": 4, 00:12:05.880 "base_bdevs_list": [ 00:12:05.880 { 00:12:05.880 "name": "pt1", 00:12:05.880 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.880 "is_configured": true, 00:12:05.880 "data_offset": 2048, 00:12:05.880 "data_size": 63488 00:12:05.880 }, 00:12:05.880 { 00:12:05.880 "name": null, 00:12:05.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.880 "is_configured": false, 00:12:05.880 "data_offset": 2048, 00:12:05.880 "data_size": 63488 00:12:05.880 }, 00:12:05.880 { 00:12:05.880 "name": null, 00:12:05.880 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.880 
"is_configured": false, 00:12:05.880 "data_offset": 2048, 00:12:05.880 "data_size": 63488 00:12:05.880 }, 00:12:05.880 { 00:12:05.880 "name": null, 00:12:05.880 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.880 "is_configured": false, 00:12:05.880 "data_offset": 2048, 00:12:05.880 "data_size": 63488 00:12:05.880 } 00:12:05.880 ] 00:12:05.880 }' 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.880 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.449 [2024-11-19 12:04:09.554478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:06.449 [2024-11-19 12:04:09.554596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.449 [2024-11-19 12:04:09.554632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:06.449 [2024-11-19 12:04:09.554724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.449 [2024-11-19 12:04:09.555247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.449 [2024-11-19 12:04:09.555317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:06.449 [2024-11-19 12:04:09.555439] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:06.449 [2024-11-19 12:04:09.555507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:06.449 pt2 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.449 [2024-11-19 12:04:09.566455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.449 "name": "raid_bdev1", 00:12:06.449 "uuid": "3c974975-0510-4a75-bf01-75ea5fc28ccf", 00:12:06.449 "strip_size_kb": 0, 00:12:06.449 "state": "configuring", 00:12:06.449 "raid_level": "raid1", 00:12:06.449 "superblock": true, 00:12:06.449 "num_base_bdevs": 4, 00:12:06.449 "num_base_bdevs_discovered": 1, 00:12:06.449 "num_base_bdevs_operational": 4, 00:12:06.449 "base_bdevs_list": [ 00:12:06.449 { 00:12:06.449 "name": "pt1", 00:12:06.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.449 "is_configured": true, 00:12:06.449 "data_offset": 2048, 00:12:06.449 "data_size": 63488 00:12:06.449 }, 00:12:06.449 { 00:12:06.449 "name": null, 00:12:06.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.449 "is_configured": false, 00:12:06.449 "data_offset": 0, 00:12:06.449 "data_size": 63488 00:12:06.449 }, 00:12:06.449 { 00:12:06.449 "name": null, 00:12:06.449 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.449 "is_configured": false, 00:12:06.449 "data_offset": 2048, 00:12:06.449 "data_size": 63488 00:12:06.449 }, 00:12:06.449 { 00:12:06.449 "name": null, 00:12:06.449 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:06.449 "is_configured": false, 00:12:06.449 "data_offset": 2048, 00:12:06.449 "data_size": 63488 00:12:06.449 } 00:12:06.449 ] 00:12:06.449 }' 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.449 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.709 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:06.709 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:06.709 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.710 [2024-11-19 12:04:10.021650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:06.710 [2024-11-19 12:04:10.021714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.710 [2024-11-19 12:04:10.021741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:06.710 [2024-11-19 12:04:10.021752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.710 [2024-11-19 12:04:10.022219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.710 [2024-11-19 12:04:10.022238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:06.710 [2024-11-19 12:04:10.022323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:06.710 [2024-11-19 12:04:10.022345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:06.710 pt2 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:06.710 12:04:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.710 [2024-11-19 12:04:10.033615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:06.710 [2024-11-19 12:04:10.033675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.710 [2024-11-19 12:04:10.033708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:06.710 [2024-11-19 12:04:10.033716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.710 [2024-11-19 12:04:10.034095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.710 [2024-11-19 12:04:10.034112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:06.710 [2024-11-19 12:04:10.034198] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:06.710 [2024-11-19 12:04:10.034216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:06.710 pt3 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.710 [2024-11-19 12:04:10.045551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:06.710 [2024-11-19 
12:04:10.045630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.710 [2024-11-19 12:04:10.045648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:06.710 [2024-11-19 12:04:10.045656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.710 [2024-11-19 12:04:10.045989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.710 [2024-11-19 12:04:10.046022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:06.710 [2024-11-19 12:04:10.046097] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:06.710 [2024-11-19 12:04:10.046114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:06.710 [2024-11-19 12:04:10.046250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:06.710 [2024-11-19 12:04:10.046259] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:06.710 [2024-11-19 12:04:10.046493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:06.710 [2024-11-19 12:04:10.046639] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:06.710 [2024-11-19 12:04:10.046652] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:06.710 [2024-11-19 12:04:10.046781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.710 pt4 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.710 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.971 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.971 "name": "raid_bdev1", 00:12:06.971 "uuid": "3c974975-0510-4a75-bf01-75ea5fc28ccf", 00:12:06.971 "strip_size_kb": 0, 00:12:06.971 "state": "online", 00:12:06.971 "raid_level": "raid1", 00:12:06.971 "superblock": true, 00:12:06.971 "num_base_bdevs": 4, 00:12:06.971 
"num_base_bdevs_discovered": 4, 00:12:06.971 "num_base_bdevs_operational": 4, 00:12:06.971 "base_bdevs_list": [ 00:12:06.971 { 00:12:06.971 "name": "pt1", 00:12:06.971 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.971 "is_configured": true, 00:12:06.971 "data_offset": 2048, 00:12:06.971 "data_size": 63488 00:12:06.971 }, 00:12:06.971 { 00:12:06.971 "name": "pt2", 00:12:06.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.971 "is_configured": true, 00:12:06.971 "data_offset": 2048, 00:12:06.971 "data_size": 63488 00:12:06.971 }, 00:12:06.971 { 00:12:06.971 "name": "pt3", 00:12:06.971 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.971 "is_configured": true, 00:12:06.971 "data_offset": 2048, 00:12:06.971 "data_size": 63488 00:12:06.971 }, 00:12:06.971 { 00:12:06.971 "name": "pt4", 00:12:06.971 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:06.971 "is_configured": true, 00:12:06.971 "data_offset": 2048, 00:12:06.971 "data_size": 63488 00:12:06.971 } 00:12:06.971 ] 00:12:06.971 }' 00:12:06.971 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.971 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.231 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:07.231 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:07.231 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:07.231 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:07.231 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:07.231 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:07.231 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:07.231 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.231 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.231 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:07.231 [2024-11-19 12:04:10.493268] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.231 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.231 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:07.231 "name": "raid_bdev1", 00:12:07.231 "aliases": [ 00:12:07.231 "3c974975-0510-4a75-bf01-75ea5fc28ccf" 00:12:07.231 ], 00:12:07.231 "product_name": "Raid Volume", 00:12:07.231 "block_size": 512, 00:12:07.231 "num_blocks": 63488, 00:12:07.231 "uuid": "3c974975-0510-4a75-bf01-75ea5fc28ccf", 00:12:07.231 "assigned_rate_limits": { 00:12:07.231 "rw_ios_per_sec": 0, 00:12:07.231 "rw_mbytes_per_sec": 0, 00:12:07.231 "r_mbytes_per_sec": 0, 00:12:07.231 "w_mbytes_per_sec": 0 00:12:07.231 }, 00:12:07.231 "claimed": false, 00:12:07.231 "zoned": false, 00:12:07.231 "supported_io_types": { 00:12:07.231 "read": true, 00:12:07.231 "write": true, 00:12:07.231 "unmap": false, 00:12:07.231 "flush": false, 00:12:07.231 "reset": true, 00:12:07.231 "nvme_admin": false, 00:12:07.231 "nvme_io": false, 00:12:07.231 "nvme_io_md": false, 00:12:07.231 "write_zeroes": true, 00:12:07.231 "zcopy": false, 00:12:07.231 "get_zone_info": false, 00:12:07.231 "zone_management": false, 00:12:07.232 "zone_append": false, 00:12:07.232 "compare": false, 00:12:07.232 "compare_and_write": false, 00:12:07.232 "abort": false, 00:12:07.232 "seek_hole": false, 00:12:07.232 "seek_data": false, 00:12:07.232 "copy": false, 00:12:07.232 "nvme_iov_md": false 00:12:07.232 }, 00:12:07.232 "memory_domains": [ 00:12:07.232 { 00:12:07.232 "dma_device_id": "system", 00:12:07.232 
"dma_device_type": 1 00:12:07.232 }, 00:12:07.232 { 00:12:07.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.232 "dma_device_type": 2 00:12:07.232 }, 00:12:07.232 { 00:12:07.232 "dma_device_id": "system", 00:12:07.232 "dma_device_type": 1 00:12:07.232 }, 00:12:07.232 { 00:12:07.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.232 "dma_device_type": 2 00:12:07.232 }, 00:12:07.232 { 00:12:07.232 "dma_device_id": "system", 00:12:07.232 "dma_device_type": 1 00:12:07.232 }, 00:12:07.232 { 00:12:07.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.232 "dma_device_type": 2 00:12:07.232 }, 00:12:07.232 { 00:12:07.232 "dma_device_id": "system", 00:12:07.232 "dma_device_type": 1 00:12:07.232 }, 00:12:07.232 { 00:12:07.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.232 "dma_device_type": 2 00:12:07.232 } 00:12:07.232 ], 00:12:07.232 "driver_specific": { 00:12:07.232 "raid": { 00:12:07.232 "uuid": "3c974975-0510-4a75-bf01-75ea5fc28ccf", 00:12:07.232 "strip_size_kb": 0, 00:12:07.232 "state": "online", 00:12:07.232 "raid_level": "raid1", 00:12:07.232 "superblock": true, 00:12:07.232 "num_base_bdevs": 4, 00:12:07.232 "num_base_bdevs_discovered": 4, 00:12:07.232 "num_base_bdevs_operational": 4, 00:12:07.232 "base_bdevs_list": [ 00:12:07.232 { 00:12:07.232 "name": "pt1", 00:12:07.232 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:07.232 "is_configured": true, 00:12:07.232 "data_offset": 2048, 00:12:07.232 "data_size": 63488 00:12:07.232 }, 00:12:07.232 { 00:12:07.232 "name": "pt2", 00:12:07.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.232 "is_configured": true, 00:12:07.232 "data_offset": 2048, 00:12:07.232 "data_size": 63488 00:12:07.232 }, 00:12:07.232 { 00:12:07.232 "name": "pt3", 00:12:07.232 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.232 "is_configured": true, 00:12:07.232 "data_offset": 2048, 00:12:07.232 "data_size": 63488 00:12:07.232 }, 00:12:07.232 { 00:12:07.232 "name": "pt4", 00:12:07.232 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:07.232 "is_configured": true, 00:12:07.232 "data_offset": 2048, 00:12:07.232 "data_size": 63488 00:12:07.232 } 00:12:07.232 ] 00:12:07.232 } 00:12:07.232 } 00:12:07.232 }' 00:12:07.232 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:07.232 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:07.232 pt2 00:12:07.232 pt3 00:12:07.232 pt4' 00:12:07.232 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.494 12:04:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.494 [2024-11-19 12:04:10.784745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.494 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3c974975-0510-4a75-bf01-75ea5fc28ccf '!=' 3c974975-0510-4a75-bf01-75ea5fc28ccf ']' 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.495 [2024-11-19 12:04:10.832360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:07.495 12:04:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.495 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.762 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.762 "name": "raid_bdev1", 00:12:07.762 "uuid": "3c974975-0510-4a75-bf01-75ea5fc28ccf", 00:12:07.762 "strip_size_kb": 0, 00:12:07.762 "state": "online", 
00:12:07.762 "raid_level": "raid1", 00:12:07.762 "superblock": true, 00:12:07.762 "num_base_bdevs": 4, 00:12:07.762 "num_base_bdevs_discovered": 3, 00:12:07.762 "num_base_bdevs_operational": 3, 00:12:07.762 "base_bdevs_list": [ 00:12:07.762 { 00:12:07.762 "name": null, 00:12:07.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.762 "is_configured": false, 00:12:07.762 "data_offset": 0, 00:12:07.762 "data_size": 63488 00:12:07.763 }, 00:12:07.763 { 00:12:07.763 "name": "pt2", 00:12:07.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.763 "is_configured": true, 00:12:07.763 "data_offset": 2048, 00:12:07.763 "data_size": 63488 00:12:07.763 }, 00:12:07.763 { 00:12:07.763 "name": "pt3", 00:12:07.763 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.763 "is_configured": true, 00:12:07.763 "data_offset": 2048, 00:12:07.763 "data_size": 63488 00:12:07.763 }, 00:12:07.763 { 00:12:07.763 "name": "pt4", 00:12:07.763 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:07.763 "is_configured": true, 00:12:07.763 "data_offset": 2048, 00:12:07.763 "data_size": 63488 00:12:07.763 } 00:12:07.763 ] 00:12:07.763 }' 00:12:07.763 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.763 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.023 [2024-11-19 12:04:11.267577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.023 [2024-11-19 12:04:11.267675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.023 [2024-11-19 12:04:11.267813] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:08.023 [2024-11-19 12:04:11.267921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.023 [2024-11-19 12:04:11.267990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:08.023 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:08.024 
12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.024 [2024-11-19 12:04:11.351434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:08.024 [2024-11-19 12:04:11.351488] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.024 [2024-11-19 12:04:11.351508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:08.024 [2024-11-19 12:04:11.351517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.024 [2024-11-19 12:04:11.353819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.024 [2024-11-19 12:04:11.353895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:08.024 [2024-11-19 12:04:11.353986] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:08.024 [2024-11-19 12:04:11.354043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:08.024 pt2 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.024 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.284 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.284 "name": "raid_bdev1", 00:12:08.284 "uuid": "3c974975-0510-4a75-bf01-75ea5fc28ccf", 00:12:08.284 "strip_size_kb": 0, 00:12:08.284 "state": "configuring", 00:12:08.284 "raid_level": "raid1", 00:12:08.284 "superblock": true, 00:12:08.284 "num_base_bdevs": 4, 00:12:08.284 "num_base_bdevs_discovered": 1, 00:12:08.284 "num_base_bdevs_operational": 3, 00:12:08.284 "base_bdevs_list": [ 00:12:08.284 { 00:12:08.284 "name": null, 00:12:08.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.284 "is_configured": false, 00:12:08.284 "data_offset": 2048, 00:12:08.284 "data_size": 63488 00:12:08.284 }, 00:12:08.284 { 00:12:08.284 "name": "pt2", 00:12:08.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.284 "is_configured": true, 00:12:08.284 "data_offset": 2048, 00:12:08.284 "data_size": 63488 00:12:08.284 }, 00:12:08.284 { 00:12:08.284 "name": null, 00:12:08.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.284 "is_configured": false, 00:12:08.284 "data_offset": 2048, 00:12:08.284 "data_size": 63488 00:12:08.284 }, 00:12:08.284 { 00:12:08.284 "name": null, 00:12:08.284 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.284 "is_configured": false, 00:12:08.284 "data_offset": 2048, 00:12:08.284 "data_size": 63488 00:12:08.284 } 00:12:08.284 ] 00:12:08.284 }' 
00:12:08.284 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.284 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.544 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:08.544 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:08.544 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:08.544 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.544 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.544 [2024-11-19 12:04:11.826770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:08.544 [2024-11-19 12:04:11.826903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.544 [2024-11-19 12:04:11.826971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:08.544 [2024-11-19 12:04:11.827018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.544 [2024-11-19 12:04:11.827591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.545 [2024-11-19 12:04:11.827658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:08.545 [2024-11-19 12:04:11.827788] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:08.545 [2024-11-19 12:04:11.827841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:08.545 pt3 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.545 "name": "raid_bdev1", 00:12:08.545 "uuid": "3c974975-0510-4a75-bf01-75ea5fc28ccf", 00:12:08.545 "strip_size_kb": 0, 00:12:08.545 "state": "configuring", 00:12:08.545 "raid_level": "raid1", 00:12:08.545 "superblock": true, 00:12:08.545 "num_base_bdevs": 4, 00:12:08.545 "num_base_bdevs_discovered": 2, 00:12:08.545 "num_base_bdevs_operational": 3, 00:12:08.545 
"base_bdevs_list": [ 00:12:08.545 { 00:12:08.545 "name": null, 00:12:08.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.545 "is_configured": false, 00:12:08.545 "data_offset": 2048, 00:12:08.545 "data_size": 63488 00:12:08.545 }, 00:12:08.545 { 00:12:08.545 "name": "pt2", 00:12:08.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.545 "is_configured": true, 00:12:08.545 "data_offset": 2048, 00:12:08.545 "data_size": 63488 00:12:08.545 }, 00:12:08.545 { 00:12:08.545 "name": "pt3", 00:12:08.545 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.545 "is_configured": true, 00:12:08.545 "data_offset": 2048, 00:12:08.545 "data_size": 63488 00:12:08.545 }, 00:12:08.545 { 00:12:08.545 "name": null, 00:12:08.545 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.545 "is_configured": false, 00:12:08.545 "data_offset": 2048, 00:12:08.545 "data_size": 63488 00:12:08.545 } 00:12:08.545 ] 00:12:08.545 }' 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.545 12:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.115 [2024-11-19 12:04:12.278024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:09.115 [2024-11-19 12:04:12.278093] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.115 [2024-11-19 12:04:12.278117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:09.115 [2024-11-19 12:04:12.278128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.115 [2024-11-19 12:04:12.278607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.115 [2024-11-19 12:04:12.278626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:09.115 [2024-11-19 12:04:12.278719] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:09.115 [2024-11-19 12:04:12.278750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:09.115 [2024-11-19 12:04:12.278922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:09.115 [2024-11-19 12:04:12.278932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:09.115 [2024-11-19 12:04:12.279274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:09.115 [2024-11-19 12:04:12.279459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:09.115 [2024-11-19 12:04:12.279474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:09.115 [2024-11-19 12:04:12.279652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.115 pt4 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.115 "name": "raid_bdev1", 00:12:09.115 "uuid": "3c974975-0510-4a75-bf01-75ea5fc28ccf", 00:12:09.115 "strip_size_kb": 0, 00:12:09.115 "state": "online", 00:12:09.115 "raid_level": "raid1", 00:12:09.115 "superblock": true, 00:12:09.115 "num_base_bdevs": 4, 00:12:09.115 "num_base_bdevs_discovered": 3, 00:12:09.115 "num_base_bdevs_operational": 3, 00:12:09.115 "base_bdevs_list": [ 00:12:09.115 { 00:12:09.115 "name": null, 00:12:09.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.115 "is_configured": false, 00:12:09.115 
"data_offset": 2048, 00:12:09.115 "data_size": 63488 00:12:09.115 }, 00:12:09.115 { 00:12:09.115 "name": "pt2", 00:12:09.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.115 "is_configured": true, 00:12:09.115 "data_offset": 2048, 00:12:09.115 "data_size": 63488 00:12:09.115 }, 00:12:09.115 { 00:12:09.115 "name": "pt3", 00:12:09.115 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:09.115 "is_configured": true, 00:12:09.115 "data_offset": 2048, 00:12:09.115 "data_size": 63488 00:12:09.115 }, 00:12:09.115 { 00:12:09.115 "name": "pt4", 00:12:09.115 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:09.115 "is_configured": true, 00:12:09.115 "data_offset": 2048, 00:12:09.115 "data_size": 63488 00:12:09.115 } 00:12:09.115 ] 00:12:09.115 }' 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.115 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.376 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:09.376 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.376 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.376 [2024-11-19 12:04:12.721184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:09.376 [2024-11-19 12:04:12.721263] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:09.376 [2024-11-19 12:04:12.721364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.376 [2024-11-19 12:04:12.721456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.376 [2024-11-19 12:04:12.721506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:09.376 12:04:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.376 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:09.376 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.376 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.376 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.376 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.636 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:09.636 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:09.636 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:09.636 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:09.636 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:09.636 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.636 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.636 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.636 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:09.636 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.636 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.636 [2024-11-19 12:04:12.781110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:09.637 [2024-11-19 12:04:12.781240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:09.637 [2024-11-19 12:04:12.781291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:09.637 [2024-11-19 12:04:12.781330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.637 [2024-11-19 12:04:12.783844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.637 [2024-11-19 12:04:12.783890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:09.637 [2024-11-19 12:04:12.783981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:09.637 [2024-11-19 12:04:12.784064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:09.637 [2024-11-19 12:04:12.784222] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:09.637 [2024-11-19 12:04:12.784236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:09.637 [2024-11-19 12:04:12.784252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:09.637 [2024-11-19 12:04:12.784320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:09.637 [2024-11-19 12:04:12.784443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:09.637 pt1 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.637 "name": "raid_bdev1", 00:12:09.637 "uuid": "3c974975-0510-4a75-bf01-75ea5fc28ccf", 00:12:09.637 "strip_size_kb": 0, 00:12:09.637 "state": "configuring", 00:12:09.637 "raid_level": "raid1", 00:12:09.637 "superblock": true, 00:12:09.637 "num_base_bdevs": 4, 00:12:09.637 "num_base_bdevs_discovered": 2, 00:12:09.637 "num_base_bdevs_operational": 3, 00:12:09.637 "base_bdevs_list": [ 00:12:09.637 { 00:12:09.637 "name": null, 00:12:09.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.637 "is_configured": false, 00:12:09.637 "data_offset": 2048, 00:12:09.637 
"data_size": 63488 00:12:09.637 }, 00:12:09.637 { 00:12:09.637 "name": "pt2", 00:12:09.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.637 "is_configured": true, 00:12:09.637 "data_offset": 2048, 00:12:09.637 "data_size": 63488 00:12:09.637 }, 00:12:09.637 { 00:12:09.637 "name": "pt3", 00:12:09.637 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:09.637 "is_configured": true, 00:12:09.637 "data_offset": 2048, 00:12:09.637 "data_size": 63488 00:12:09.637 }, 00:12:09.637 { 00:12:09.637 "name": null, 00:12:09.637 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:09.637 "is_configured": false, 00:12:09.637 "data_offset": 2048, 00:12:09.637 "data_size": 63488 00:12:09.637 } 00:12:09.637 ] 00:12:09.637 }' 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.637 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.898 [2024-11-19 
12:04:13.264250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:09.898 [2024-11-19 12:04:13.264380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.898 [2024-11-19 12:04:13.264419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:09.898 [2024-11-19 12:04:13.264447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.898 [2024-11-19 12:04:13.264907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.898 [2024-11-19 12:04:13.264964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:09.898 [2024-11-19 12:04:13.265085] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:09.898 [2024-11-19 12:04:13.265146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:09.898 [2024-11-19 12:04:13.265327] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:09.898 [2024-11-19 12:04:13.265366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:09.898 [2024-11-19 12:04:13.265626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:09.898 [2024-11-19 12:04:13.265811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:09.898 [2024-11-19 12:04:13.265853] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:09.898 [2024-11-19 12:04:13.266059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.898 pt4 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:09.898 12:04:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.898 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.159 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.159 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.159 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.159 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.159 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.159 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.159 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.159 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.159 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.159 "name": "raid_bdev1", 00:12:10.159 "uuid": "3c974975-0510-4a75-bf01-75ea5fc28ccf", 00:12:10.159 "strip_size_kb": 0, 00:12:10.159 "state": "online", 00:12:10.159 "raid_level": "raid1", 00:12:10.159 "superblock": true, 00:12:10.159 "num_base_bdevs": 4, 00:12:10.159 "num_base_bdevs_discovered": 3, 00:12:10.159 "num_base_bdevs_operational": 3, 00:12:10.159 "base_bdevs_list": [ 00:12:10.159 { 
00:12:10.159 "name": null, 00:12:10.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.159 "is_configured": false, 00:12:10.159 "data_offset": 2048, 00:12:10.159 "data_size": 63488 00:12:10.159 }, 00:12:10.159 { 00:12:10.159 "name": "pt2", 00:12:10.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:10.159 "is_configured": true, 00:12:10.159 "data_offset": 2048, 00:12:10.159 "data_size": 63488 00:12:10.159 }, 00:12:10.159 { 00:12:10.159 "name": "pt3", 00:12:10.159 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:10.159 "is_configured": true, 00:12:10.159 "data_offset": 2048, 00:12:10.159 "data_size": 63488 00:12:10.159 }, 00:12:10.159 { 00:12:10.159 "name": "pt4", 00:12:10.159 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:10.159 "is_configured": true, 00:12:10.159 "data_offset": 2048, 00:12:10.159 "data_size": 63488 00:12:10.159 } 00:12:10.159 ] 00:12:10.159 }' 00:12:10.159 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.159 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.419 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:10.419 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:10.419 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.419 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.419 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.419 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:10.419 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:10.419 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.419 
12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.419 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:10.419 [2024-11-19 12:04:13.759683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.419 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.678 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3c974975-0510-4a75-bf01-75ea5fc28ccf '!=' 3c974975-0510-4a75-bf01-75ea5fc28ccf ']' 00:12:10.678 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74532 00:12:10.678 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74532 ']' 00:12:10.678 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74532 00:12:10.678 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:10.678 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.678 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74532 00:12:10.678 killing process with pid 74532 00:12:10.678 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.678 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.678 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74532' 00:12:10.678 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74532 00:12:10.678 [2024-11-19 12:04:13.844858] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.678 [2024-11-19 12:04:13.844966] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.678 [2024-11-19 12:04:13.845054] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.678 12:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74532 00:12:10.678 [2024-11-19 12:04:13.845067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:10.938 [2024-11-19 12:04:14.230759] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.320 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:12.320 00:12:12.320 real 0m8.461s 00:12:12.320 user 0m13.358s 00:12:12.320 sys 0m1.559s 00:12:12.320 12:04:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.320 ************************************ 00:12:12.320 END TEST raid_superblock_test 00:12:12.320 ************************************ 00:12:12.320 12:04:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.320 12:04:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:12.320 12:04:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:12.320 12:04:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.320 12:04:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.320 ************************************ 00:12:12.320 START TEST raid_read_error_test 00:12:12.320 ************************************ 00:12:12.320 12:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:12.320 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:12.320 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:12.320 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:12.320 12:04:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:12.320 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.320 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:12.320 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.320 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.320 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:12.320 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.320 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.320 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fHtmDt72Md 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75019 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75019 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75019 ']' 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.321 12:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.321 [2024-11-19 12:04:15.481401] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:12:12.321 [2024-11-19 12:04:15.481515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75019 ] 00:12:12.321 [2024-11-19 12:04:15.658106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.581 [2024-11-19 12:04:15.781961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.840 [2024-11-19 12:04:15.992253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.840 [2024-11-19 12:04:15.992359] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.100 BaseBdev1_malloc 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.100 true 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.100 [2024-11-19 12:04:16.384455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:13.100 [2024-11-19 12:04:16.384508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.100 [2024-11-19 12:04:16.384528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:13.100 [2024-11-19 12:04:16.384538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.100 [2024-11-19 12:04:16.386532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.100 [2024-11-19 12:04:16.386570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:13.100 BaseBdev1 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.100 BaseBdev2_malloc 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.100 true 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.100 [2024-11-19 12:04:16.450578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:13.100 [2024-11-19 12:04:16.450629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.100 [2024-11-19 12:04:16.450645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:13.100 [2024-11-19 12:04:16.450654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.100 [2024-11-19 12:04:16.452732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.100 [2024-11-19 12:04:16.452817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:13.100 BaseBdev2 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.100 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.360 BaseBdev3_malloc 00:12:13.360 12:04:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.360 true 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.360 [2024-11-19 12:04:16.529440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:13.360 [2024-11-19 12:04:16.529536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.360 [2024-11-19 12:04:16.529558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:13.360 [2024-11-19 12:04:16.529568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.360 [2024-11-19 12:04:16.531636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.360 [2024-11-19 12:04:16.531673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:13.360 BaseBdev3 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.360 BaseBdev4_malloc 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.360 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.360 true 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.361 [2024-11-19 12:04:16.594958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:13.361 [2024-11-19 12:04:16.595021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.361 [2024-11-19 12:04:16.595063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:13.361 [2024-11-19 12:04:16.595074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.361 [2024-11-19 12:04:16.597136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.361 [2024-11-19 12:04:16.597172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:13.361 BaseBdev4 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.361 [2024-11-19 12:04:16.607005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.361 [2024-11-19 12:04:16.608795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.361 [2024-11-19 12:04:16.608870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.361 [2024-11-19 12:04:16.608931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:13.361 [2024-11-19 12:04:16.609164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:13.361 [2024-11-19 12:04:16.609179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:13.361 [2024-11-19 12:04:16.609402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:13.361 [2024-11-19 12:04:16.609581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:13.361 [2024-11-19 12:04:16.609596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:13.361 [2024-11-19 12:04:16.609744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:13.361 12:04:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.361 "name": "raid_bdev1", 00:12:13.361 "uuid": "6cdc75ea-7cb1-472b-a1cb-20b22df303e6", 00:12:13.361 "strip_size_kb": 0, 00:12:13.361 "state": "online", 00:12:13.361 "raid_level": "raid1", 00:12:13.361 "superblock": true, 00:12:13.361 "num_base_bdevs": 4, 00:12:13.361 "num_base_bdevs_discovered": 4, 00:12:13.361 "num_base_bdevs_operational": 4, 00:12:13.361 "base_bdevs_list": [ 00:12:13.361 { 
00:12:13.361 "name": "BaseBdev1", 00:12:13.361 "uuid": "84207beb-c34a-5d87-b667-ad8697a50aea", 00:12:13.361 "is_configured": true, 00:12:13.361 "data_offset": 2048, 00:12:13.361 "data_size": 63488 00:12:13.361 }, 00:12:13.361 { 00:12:13.361 "name": "BaseBdev2", 00:12:13.361 "uuid": "3be7f01f-c607-5ba0-b3c1-715ba455df6e", 00:12:13.361 "is_configured": true, 00:12:13.361 "data_offset": 2048, 00:12:13.361 "data_size": 63488 00:12:13.361 }, 00:12:13.361 { 00:12:13.361 "name": "BaseBdev3", 00:12:13.361 "uuid": "82a40180-8ddc-52dd-bea1-ec476c06025f", 00:12:13.361 "is_configured": true, 00:12:13.361 "data_offset": 2048, 00:12:13.361 "data_size": 63488 00:12:13.361 }, 00:12:13.361 { 00:12:13.361 "name": "BaseBdev4", 00:12:13.361 "uuid": "616e8b90-d9bf-58fb-ae1e-9ccc847ae540", 00:12:13.361 "is_configured": true, 00:12:13.361 "data_offset": 2048, 00:12:13.361 "data_size": 63488 00:12:13.361 } 00:12:13.361 ] 00:12:13.361 }' 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.361 12:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.930 12:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:13.930 12:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:13.930 [2024-11-19 12:04:17.223328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.898 12:04:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.898 12:04:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.898 "name": "raid_bdev1", 00:12:14.898 "uuid": "6cdc75ea-7cb1-472b-a1cb-20b22df303e6", 00:12:14.898 "strip_size_kb": 0, 00:12:14.898 "state": "online", 00:12:14.898 "raid_level": "raid1", 00:12:14.898 "superblock": true, 00:12:14.898 "num_base_bdevs": 4, 00:12:14.898 "num_base_bdevs_discovered": 4, 00:12:14.898 "num_base_bdevs_operational": 4, 00:12:14.898 "base_bdevs_list": [ 00:12:14.898 { 00:12:14.898 "name": "BaseBdev1", 00:12:14.898 "uuid": "84207beb-c34a-5d87-b667-ad8697a50aea", 00:12:14.898 "is_configured": true, 00:12:14.898 "data_offset": 2048, 00:12:14.898 "data_size": 63488 00:12:14.898 }, 00:12:14.898 { 00:12:14.898 "name": "BaseBdev2", 00:12:14.898 "uuid": "3be7f01f-c607-5ba0-b3c1-715ba455df6e", 00:12:14.898 "is_configured": true, 00:12:14.898 "data_offset": 2048, 00:12:14.898 "data_size": 63488 00:12:14.898 }, 00:12:14.898 { 00:12:14.898 "name": "BaseBdev3", 00:12:14.898 "uuid": "82a40180-8ddc-52dd-bea1-ec476c06025f", 00:12:14.898 "is_configured": true, 00:12:14.898 "data_offset": 2048, 00:12:14.898 "data_size": 63488 00:12:14.898 }, 00:12:14.898 { 00:12:14.898 "name": "BaseBdev4", 00:12:14.898 "uuid": "616e8b90-d9bf-58fb-ae1e-9ccc847ae540", 00:12:14.898 "is_configured": true, 00:12:14.898 "data_offset": 2048, 00:12:14.898 "data_size": 63488 00:12:14.898 } 00:12:14.898 ] 00:12:14.898 }' 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.898 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.158 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:15.158 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.158 12:04:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.418 [2024-11-19 12:04:18.533856] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.418 [2024-11-19 12:04:18.533958] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.418 [2024-11-19 12:04:18.536791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.418 [2024-11-19 12:04:18.536892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.418 [2024-11-19 12:04:18.537045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.418 [2024-11-19 12:04:18.537094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:15.418 { 00:12:15.418 "results": [ 00:12:15.418 { 00:12:15.418 "job": "raid_bdev1", 00:12:15.418 "core_mask": "0x1", 00:12:15.418 "workload": "randrw", 00:12:15.418 "percentage": 50, 00:12:15.418 "status": "finished", 00:12:15.418 "queue_depth": 1, 00:12:15.418 "io_size": 131072, 00:12:15.418 "runtime": 1.311276, 00:12:15.418 "iops": 10855.07551423194, 00:12:15.418 "mibps": 1356.8844392789924, 00:12:15.418 "io_failed": 0, 00:12:15.418 "io_timeout": 0, 00:12:15.418 "avg_latency_us": 89.58722193554641, 00:12:15.418 "min_latency_us": 22.46986899563319, 00:12:15.418 "max_latency_us": 1559.6995633187773 00:12:15.418 } 00:12:15.418 ], 00:12:15.418 "core_count": 1 00:12:15.418 } 00:12:15.418 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.418 12:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75019 00:12:15.418 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75019 ']' 00:12:15.418 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75019 00:12:15.418 12:04:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:15.418 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.418 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75019 00:12:15.418 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.418 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.418 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75019' 00:12:15.418 killing process with pid 75019 00:12:15.418 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75019 00:12:15.418 [2024-11-19 12:04:18.585182] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.418 12:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75019 00:12:15.678 [2024-11-19 12:04:18.915159] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.059 12:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:17.059 12:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fHtmDt72Md 00:12:17.059 12:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:17.059 12:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:17.059 12:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:17.059 ************************************ 00:12:17.059 END TEST raid_read_error_test 00:12:17.059 ************************************ 00:12:17.059 12:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.059 12:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:17.059 12:04:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:17.059 00:12:17.059 real 0m4.859s 00:12:17.059 user 0m5.754s 00:12:17.059 sys 0m0.583s 00:12:17.059 12:04:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.059 12:04:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.059 12:04:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:17.059 12:04:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:17.059 12:04:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.059 12:04:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.059 ************************************ 00:12:17.059 START TEST raid_write_error_test 00:12:17.059 ************************************ 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:17.059 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oJh9lRXVMY 00:12:17.059 12:04:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75168 00:12:17.060 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:17.060 12:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75168 00:12:17.060 12:04:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75168 ']' 00:12:17.060 12:04:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.060 12:04:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.060 12:04:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.060 12:04:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.060 12:04:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.319 [2024-11-19 12:04:20.447086] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:12:17.319 [2024-11-19 12:04:20.447301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75168 ] 00:12:17.319 [2024-11-19 12:04:20.627541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.578 [2024-11-19 12:04:20.764983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.838 [2024-11-19 12:04:21.000530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.838 [2024-11-19 12:04:21.000680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.099 BaseBdev1_malloc 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.099 true 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.099 [2024-11-19 12:04:21.383265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:18.099 [2024-11-19 12:04:21.383334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.099 [2024-11-19 12:04:21.383358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:18.099 [2024-11-19 12:04:21.383371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.099 [2024-11-19 12:04:21.385840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.099 [2024-11-19 12:04:21.385889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:18.099 BaseBdev1 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.099 BaseBdev2_malloc 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:18.099 12:04:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.099 true 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.099 [2024-11-19 12:04:21.451337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:18.099 [2024-11-19 12:04:21.451470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.099 [2024-11-19 12:04:21.451498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:18.099 [2024-11-19 12:04:21.451514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.099 [2024-11-19 12:04:21.454086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.099 [2024-11-19 12:04:21.454126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:18.099 BaseBdev2 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.099 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:18.360 BaseBdev3_malloc 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.360 true 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.360 [2024-11-19 12:04:21.536433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:18.360 [2024-11-19 12:04:21.536492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.360 [2024-11-19 12:04:21.536513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:18.360 [2024-11-19 12:04:21.536526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.360 [2024-11-19 12:04:21.538952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.360 [2024-11-19 12:04:21.539010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:18.360 BaseBdev3 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.360 BaseBdev4_malloc 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.360 true 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.360 [2024-11-19 12:04:21.608962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:18.360 [2024-11-19 12:04:21.609056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.360 [2024-11-19 12:04:21.609078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:18.360 [2024-11-19 12:04:21.609090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.360 [2024-11-19 12:04:21.611504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.360 [2024-11-19 12:04:21.611551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:18.360 BaseBdev4 
00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.360 [2024-11-19 12:04:21.621008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.360 [2024-11-19 12:04:21.623234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.360 [2024-11-19 12:04:21.623321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.360 [2024-11-19 12:04:21.623394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:18.360 [2024-11-19 12:04:21.623668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:18.360 [2024-11-19 12:04:21.623690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:18.360 [2024-11-19 12:04:21.623970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:18.360 [2024-11-19 12:04:21.624188] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:18.360 [2024-11-19 12:04:21.624241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:18.360 [2024-11-19 12:04:21.624446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.360 "name": "raid_bdev1", 00:12:18.360 "uuid": "7c61f758-0691-4d77-ae8e-844ad939b715", 00:12:18.360 "strip_size_kb": 0, 00:12:18.360 "state": "online", 00:12:18.360 "raid_level": "raid1", 00:12:18.360 "superblock": true, 00:12:18.360 "num_base_bdevs": 4, 00:12:18.360 "num_base_bdevs_discovered": 4, 00:12:18.360 
"num_base_bdevs_operational": 4, 00:12:18.360 "base_bdevs_list": [ 00:12:18.360 { 00:12:18.360 "name": "BaseBdev1", 00:12:18.360 "uuid": "3dba5546-8d14-50c0-8696-d5563c8ca6ae", 00:12:18.360 "is_configured": true, 00:12:18.360 "data_offset": 2048, 00:12:18.360 "data_size": 63488 00:12:18.360 }, 00:12:18.360 { 00:12:18.360 "name": "BaseBdev2", 00:12:18.360 "uuid": "59cc4ea7-5804-525c-8b9f-aacc1d6ab33c", 00:12:18.360 "is_configured": true, 00:12:18.360 "data_offset": 2048, 00:12:18.360 "data_size": 63488 00:12:18.360 }, 00:12:18.360 { 00:12:18.360 "name": "BaseBdev3", 00:12:18.360 "uuid": "3e9129e3-ba74-521b-a744-02412689879d", 00:12:18.360 "is_configured": true, 00:12:18.360 "data_offset": 2048, 00:12:18.360 "data_size": 63488 00:12:18.360 }, 00:12:18.360 { 00:12:18.360 "name": "BaseBdev4", 00:12:18.360 "uuid": "23aa427b-a1ec-5405-b6e8-cf63721cbdc6", 00:12:18.360 "is_configured": true, 00:12:18.360 "data_offset": 2048, 00:12:18.360 "data_size": 63488 00:12:18.360 } 00:12:18.360 ] 00:12:18.360 }' 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.360 12:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.928 12:04:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:18.928 12:04:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:18.928 [2024-11-19 12:04:22.197710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.868 [2024-11-19 12:04:23.118661] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:19.868 [2024-11-19 12:04:23.118817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:19.868 [2024-11-19 12:04:23.119132] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.868 "name": "raid_bdev1", 00:12:19.868 "uuid": "7c61f758-0691-4d77-ae8e-844ad939b715", 00:12:19.868 "strip_size_kb": 0, 00:12:19.868 "state": "online", 00:12:19.868 "raid_level": "raid1", 00:12:19.868 "superblock": true, 00:12:19.868 "num_base_bdevs": 4, 00:12:19.868 "num_base_bdevs_discovered": 3, 00:12:19.868 "num_base_bdevs_operational": 3, 00:12:19.868 "base_bdevs_list": [ 00:12:19.868 { 00:12:19.868 "name": null, 00:12:19.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.868 "is_configured": false, 00:12:19.868 "data_offset": 0, 00:12:19.868 "data_size": 63488 00:12:19.868 }, 00:12:19.868 { 00:12:19.868 "name": "BaseBdev2", 00:12:19.868 "uuid": "59cc4ea7-5804-525c-8b9f-aacc1d6ab33c", 00:12:19.868 "is_configured": true, 00:12:19.868 "data_offset": 2048, 00:12:19.868 "data_size": 63488 00:12:19.868 }, 00:12:19.868 { 00:12:19.868 "name": "BaseBdev3", 00:12:19.868 "uuid": "3e9129e3-ba74-521b-a744-02412689879d", 00:12:19.868 "is_configured": true, 00:12:19.868 "data_offset": 2048, 00:12:19.868 "data_size": 63488 00:12:19.868 }, 00:12:19.868 { 00:12:19.868 "name": "BaseBdev4", 00:12:19.868 "uuid": "23aa427b-a1ec-5405-b6e8-cf63721cbdc6", 00:12:19.868 "is_configured": true, 00:12:19.868 "data_offset": 2048, 00:12:19.868 "data_size": 63488 00:12:19.868 } 00:12:19.868 ] 
00:12:19.868 }' 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.868 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.439 [2024-11-19 12:04:23.616303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:20.439 [2024-11-19 12:04:23.616339] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.439 [2024-11-19 12:04:23.619146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.439 [2024-11-19 12:04:23.619257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.439 [2024-11-19 12:04:23.619405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.439 [2024-11-19 12:04:23.619421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:20.439 { 00:12:20.439 "results": [ 00:12:20.439 { 00:12:20.439 "job": "raid_bdev1", 00:12:20.439 "core_mask": "0x1", 00:12:20.439 "workload": "randrw", 00:12:20.439 "percentage": 50, 00:12:20.439 "status": "finished", 00:12:20.439 "queue_depth": 1, 00:12:20.439 "io_size": 131072, 00:12:20.439 "runtime": 1.419008, 00:12:20.439 "iops": 10188.807955980516, 00:12:20.439 "mibps": 1273.6009944975644, 00:12:20.439 "io_failed": 0, 00:12:20.439 "io_timeout": 0, 00:12:20.439 "avg_latency_us": 94.89304082718743, 00:12:20.439 "min_latency_us": 23.923144104803495, 00:12:20.439 "max_latency_us": 1452.380786026201 00:12:20.439 } 00:12:20.439 ], 00:12:20.439 "core_count": 1 
00:12:20.439 } 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75168 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75168 ']' 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75168 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75168 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.439 killing process with pid 75168 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75168' 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75168 00:12:20.439 [2024-11-19 12:04:23.665917] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.439 12:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75168 00:12:20.706 [2024-11-19 12:04:24.012300] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:22.086 12:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oJh9lRXVMY 00:12:22.086 12:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:22.086 12:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:22.086 12:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:22.086 12:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:22.086 12:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:22.086 12:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:22.086 ************************************ 00:12:22.086 END TEST raid_write_error_test 00:12:22.086 ************************************ 00:12:22.086 12:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:22.086 00:12:22.086 real 0m4.829s 00:12:22.086 user 0m5.786s 00:12:22.086 sys 0m0.615s 00:12:22.086 12:04:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.086 12:04:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.086 12:04:25 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:22.086 12:04:25 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:22.086 12:04:25 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:22.086 12:04:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:22.086 12:04:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.086 12:04:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:22.086 ************************************ 00:12:22.086 START TEST raid_rebuild_test 00:12:22.086 ************************************ 00:12:22.086 12:04:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:22.086 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:22.086 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:22.087 
12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75310 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75310 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75310 ']' 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.087 12:04:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.087 [2024-11-19 12:04:25.304371] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:12:22.087 [2024-11-19 12:04:25.304582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:22.087 Zero copy mechanism will not be used. 
00:12:22.087 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75310 ] 00:12:22.346 [2024-11-19 12:04:25.477365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.346 [2024-11-19 12:04:25.617003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.605 [2024-11-19 12:04:25.861645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.605 [2024-11-19 12:04:25.861782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.864 BaseBdev1_malloc 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.864 [2024-11-19 12:04:26.198172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:22.864 [2024-11-19 12:04:26.198296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.864 [2024-11-19 
12:04:26.198326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:22.864 [2024-11-19 12:04:26.198337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.864 [2024-11-19 12:04:26.200441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.864 [2024-11-19 12:04:26.200482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:22.864 BaseBdev1 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.864 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.124 BaseBdev2_malloc 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.124 [2024-11-19 12:04:26.253207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:23.124 [2024-11-19 12:04:26.253265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.124 [2024-11-19 12:04:26.253284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:23.124 [2024-11-19 12:04:26.253294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:12:23.124 [2024-11-19 12:04:26.255335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.124 [2024-11-19 12:04:26.255376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:23.124 BaseBdev2 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.124 spare_malloc 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.124 spare_delay 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.124 [2024-11-19 12:04:26.326823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:23.124 [2024-11-19 12:04:26.326879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.124 [2024-11-19 12:04:26.326897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:12:23.124 [2024-11-19 12:04:26.326907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.124 [2024-11-19 12:04:26.329090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.124 [2024-11-19 12:04:26.329128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:23.124 spare 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.124 [2024-11-19 12:04:26.338855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.124 [2024-11-19 12:04:26.340608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.124 [2024-11-19 12:04:26.340690] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:23.124 [2024-11-19 12:04:26.340704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:23.124 [2024-11-19 12:04:26.340930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:23.124 [2024-11-19 12:04:26.341100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:23.124 [2024-11-19 12:04:26.341113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:23.124 [2024-11-19 12:04:26.341262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.124 
12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.124 "name": "raid_bdev1", 00:12:23.124 "uuid": "51af0778-cdb1-4451-98af-ce4c1703b547", 00:12:23.124 "strip_size_kb": 0, 00:12:23.124 "state": "online", 00:12:23.124 "raid_level": "raid1", 00:12:23.124 "superblock": false, 00:12:23.124 "num_base_bdevs": 2, 00:12:23.124 "num_base_bdevs_discovered": 
2, 00:12:23.124 "num_base_bdevs_operational": 2, 00:12:23.124 "base_bdevs_list": [ 00:12:23.124 { 00:12:23.124 "name": "BaseBdev1", 00:12:23.124 "uuid": "35dddbc3-f2bd-5230-ad8d-3e86aacf1fe3", 00:12:23.124 "is_configured": true, 00:12:23.124 "data_offset": 0, 00:12:23.124 "data_size": 65536 00:12:23.124 }, 00:12:23.124 { 00:12:23.124 "name": "BaseBdev2", 00:12:23.124 "uuid": "9d2808e6-7676-5e3d-a6aa-061c11dab65f", 00:12:23.124 "is_configured": true, 00:12:23.124 "data_offset": 0, 00:12:23.124 "data_size": 65536 00:12:23.124 } 00:12:23.124 ] 00:12:23.124 }' 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.124 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.694 [2024-11-19 12:04:26.842402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:23.694 12:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:23.954 [2024-11-19 12:04:27.161586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:23.954 /dev/nbd0 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.954 1+0 records in 00:12:23.954 1+0 records out 00:12:23.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247049 s, 16.6 MB/s 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:12:23.954 12:04:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:29.226 65536+0 records in 00:12:29.226 65536+0 records out 00:12:29.226 33554432 bytes (34 MB, 32 MiB) copied, 4.36483 s, 7.7 MB/s 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:29.226 [2024-11-19 12:04:31.788589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.226 
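The trace above shows the harness's bounded-retry device wait (`waitfornbd` in common/autotest_common.sh polls `/proc/partitions` up to 20 times before `break`). A minimal sketch of that polling pattern, runnable anywhere — the `wait_for_entry` name and the temp-file stand-in for `/proc/partitions` are illustrative assumptions, not the harness's actual code:

```shell
#!/usr/bin/env bash
# Sketch of the bounded-retry wait pattern seen in the trace: poll a
# listing file for a device name, giving up after 20 attempts.
# "wait_for_entry", "listing", and "dev" are hypothetical stand-ins for
# the real waitfornbd helper, /proc/partitions, and the nbd name.
wait_for_entry() {
    local listing=$1 dev=$2 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$dev" "$listing"; then
            return 0        # device appeared in the listing
        fi
        sleep 0.1           # brief back-off before the next poll
    done
    return 1                # not found after 20 attempts
}

# Demo against a temp file instead of /proc/partitions.
tmp=$(mktemp)
echo "259 0 65536 nbd0" > "$tmp"
wait_for_entry "$tmp" nbd0 && echo "found"
rm -f "$tmp"
```

The real helper then confirms the device is usable with a direct-I/O `dd` read, as in the `1+0 records in` lines above, rather than trusting the partition listing alone.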
12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.226 [2024-11-19 12:04:31.797166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.226 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:29.227 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.227 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.227 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.227 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.227 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.227 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.227 12:04:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.227 12:04:31 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.227 12:04:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.227 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.227 "name": "raid_bdev1", 00:12:29.227 "uuid": "51af0778-cdb1-4451-98af-ce4c1703b547", 00:12:29.227 "strip_size_kb": 0, 00:12:29.227 "state": "online", 00:12:29.227 "raid_level": "raid1", 00:12:29.227 "superblock": false, 00:12:29.227 "num_base_bdevs": 2, 00:12:29.227 "num_base_bdevs_discovered": 1, 00:12:29.227 "num_base_bdevs_operational": 1, 00:12:29.227 "base_bdevs_list": [ 00:12:29.227 { 00:12:29.227 "name": null, 00:12:29.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.227 "is_configured": false, 00:12:29.227 "data_offset": 0, 00:12:29.227 "data_size": 65536 00:12:29.227 }, 00:12:29.227 { 00:12:29.227 "name": "BaseBdev2", 00:12:29.227 "uuid": "9d2808e6-7676-5e3d-a6aa-061c11dab65f", 00:12:29.227 "is_configured": true, 00:12:29.227 "data_offset": 0, 00:12:29.227 "data_size": 65536 00:12:29.227 } 00:12:29.227 ] 00:12:29.227 }' 00:12:29.227 12:04:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.227 12:04:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.227 12:04:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:29.227 12:04:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.227 12:04:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.227 [2024-11-19 12:04:32.248418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:29.227 [2024-11-19 12:04:32.264330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:29.227 12:04:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.227 12:04:32 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:29.227 [2024-11-19 12:04:32.266104] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:30.165 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.165 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.165 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.165 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.166 "name": "raid_bdev1", 00:12:30.166 "uuid": "51af0778-cdb1-4451-98af-ce4c1703b547", 00:12:30.166 "strip_size_kb": 0, 00:12:30.166 "state": "online", 00:12:30.166 "raid_level": "raid1", 00:12:30.166 "superblock": false, 00:12:30.166 "num_base_bdevs": 2, 00:12:30.166 "num_base_bdevs_discovered": 2, 00:12:30.166 "num_base_bdevs_operational": 2, 00:12:30.166 "process": { 00:12:30.166 "type": "rebuild", 00:12:30.166 "target": "spare", 00:12:30.166 "progress": { 00:12:30.166 "blocks": 20480, 00:12:30.166 "percent": 31 00:12:30.166 } 00:12:30.166 }, 00:12:30.166 "base_bdevs_list": [ 00:12:30.166 { 
00:12:30.166 "name": "spare", 00:12:30.166 "uuid": "1c0bf41f-228d-5618-8e55-5086ec305412", 00:12:30.166 "is_configured": true, 00:12:30.166 "data_offset": 0, 00:12:30.166 "data_size": 65536 00:12:30.166 }, 00:12:30.166 { 00:12:30.166 "name": "BaseBdev2", 00:12:30.166 "uuid": "9d2808e6-7676-5e3d-a6aa-061c11dab65f", 00:12:30.166 "is_configured": true, 00:12:30.166 "data_offset": 0, 00:12:30.166 "data_size": 65536 00:12:30.166 } 00:12:30.166 ] 00:12:30.166 }' 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.166 [2024-11-19 12:04:33.429497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:30.166 [2024-11-19 12:04:33.471090] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:30.166 [2024-11-19 12:04:33.471174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.166 [2024-11-19 12:04:33.471202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:30.166 [2024-11-19 12:04:33.471212] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.166 12:04:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.166 12:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.425 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.425 "name": "raid_bdev1", 00:12:30.425 "uuid": "51af0778-cdb1-4451-98af-ce4c1703b547", 00:12:30.425 "strip_size_kb": 0, 00:12:30.425 "state": "online", 00:12:30.425 "raid_level": "raid1", 00:12:30.425 "superblock": false, 00:12:30.425 "num_base_bdevs": 2, 00:12:30.425 "num_base_bdevs_discovered": 1, 
00:12:30.425 "num_base_bdevs_operational": 1, 00:12:30.425 "base_bdevs_list": [ 00:12:30.425 { 00:12:30.425 "name": null, 00:12:30.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.425 "is_configured": false, 00:12:30.425 "data_offset": 0, 00:12:30.425 "data_size": 65536 00:12:30.425 }, 00:12:30.425 { 00:12:30.425 "name": "BaseBdev2", 00:12:30.425 "uuid": "9d2808e6-7676-5e3d-a6aa-061c11dab65f", 00:12:30.425 "is_configured": true, 00:12:30.425 "data_offset": 0, 00:12:30.425 "data_size": 65536 00:12:30.425 } 00:12:30.425 ] 00:12:30.425 }' 00:12:30.425 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.425 12:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.685 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.685 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.685 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.685 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.685 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.685 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.685 12:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.685 12:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.685 12:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.685 12:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.685 12:04:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.685 "name": "raid_bdev1", 00:12:30.685 "uuid": 
"51af0778-cdb1-4451-98af-ce4c1703b547", 00:12:30.685 "strip_size_kb": 0, 00:12:30.685 "state": "online", 00:12:30.685 "raid_level": "raid1", 00:12:30.685 "superblock": false, 00:12:30.685 "num_base_bdevs": 2, 00:12:30.685 "num_base_bdevs_discovered": 1, 00:12:30.685 "num_base_bdevs_operational": 1, 00:12:30.685 "base_bdevs_list": [ 00:12:30.685 { 00:12:30.685 "name": null, 00:12:30.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.685 "is_configured": false, 00:12:30.685 "data_offset": 0, 00:12:30.685 "data_size": 65536 00:12:30.685 }, 00:12:30.685 { 00:12:30.685 "name": "BaseBdev2", 00:12:30.685 "uuid": "9d2808e6-7676-5e3d-a6aa-061c11dab65f", 00:12:30.685 "is_configured": true, 00:12:30.685 "data_offset": 0, 00:12:30.685 "data_size": 65536 00:12:30.685 } 00:12:30.685 ] 00:12:30.685 }' 00:12:30.685 12:04:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.945 12:04:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.945 12:04:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.945 12:04:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.945 12:04:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:30.945 12:04:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.945 12:04:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.945 [2024-11-19 12:04:34.124805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.945 [2024-11-19 12:04:34.140646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:30.945 12:04:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.945 12:04:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:12:30.945 [2024-11-19 12:04:34.142498] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:31.885 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.885 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.885 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.885 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.885 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.885 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.885 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.885 12:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.885 12:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.885 12:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.885 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.885 "name": "raid_bdev1", 00:12:31.885 "uuid": "51af0778-cdb1-4451-98af-ce4c1703b547", 00:12:31.885 "strip_size_kb": 0, 00:12:31.885 "state": "online", 00:12:31.885 "raid_level": "raid1", 00:12:31.885 "superblock": false, 00:12:31.885 "num_base_bdevs": 2, 00:12:31.885 "num_base_bdevs_discovered": 2, 00:12:31.885 "num_base_bdevs_operational": 2, 00:12:31.885 "process": { 00:12:31.885 "type": "rebuild", 00:12:31.885 "target": "spare", 00:12:31.885 "progress": { 00:12:31.885 "blocks": 20480, 00:12:31.885 "percent": 31 00:12:31.885 } 00:12:31.885 }, 00:12:31.885 "base_bdevs_list": [ 00:12:31.885 { 00:12:31.885 "name": "spare", 00:12:31.885 "uuid": 
"1c0bf41f-228d-5618-8e55-5086ec305412", 00:12:31.885 "is_configured": true, 00:12:31.885 "data_offset": 0, 00:12:31.885 "data_size": 65536 00:12:31.885 }, 00:12:31.885 { 00:12:31.885 "name": "BaseBdev2", 00:12:31.885 "uuid": "9d2808e6-7676-5e3d-a6aa-061c11dab65f", 00:12:31.885 "is_configured": true, 00:12:31.885 "data_offset": 0, 00:12:31.885 "data_size": 65536 00:12:31.885 } 00:12:31.885 ] 00:12:31.885 }' 00:12:31.885 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.885 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.885 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=368 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.145 "name": "raid_bdev1", 00:12:32.145 "uuid": "51af0778-cdb1-4451-98af-ce4c1703b547", 00:12:32.145 "strip_size_kb": 0, 00:12:32.145 "state": "online", 00:12:32.145 "raid_level": "raid1", 00:12:32.145 "superblock": false, 00:12:32.145 "num_base_bdevs": 2, 00:12:32.145 "num_base_bdevs_discovered": 2, 00:12:32.145 "num_base_bdevs_operational": 2, 00:12:32.145 "process": { 00:12:32.145 "type": "rebuild", 00:12:32.145 "target": "spare", 00:12:32.145 "progress": { 00:12:32.145 "blocks": 22528, 00:12:32.145 "percent": 34 00:12:32.145 } 00:12:32.145 }, 00:12:32.145 "base_bdevs_list": [ 00:12:32.145 { 00:12:32.145 "name": "spare", 00:12:32.145 "uuid": "1c0bf41f-228d-5618-8e55-5086ec305412", 00:12:32.145 "is_configured": true, 00:12:32.145 "data_offset": 0, 00:12:32.145 "data_size": 65536 00:12:32.145 }, 00:12:32.145 { 00:12:32.145 "name": "BaseBdev2", 00:12:32.145 "uuid": "9d2808e6-7676-5e3d-a6aa-061c11dab65f", 00:12:32.145 "is_configured": true, 00:12:32.145 "data_offset": 0, 00:12:32.145 "data_size": 65536 00:12:32.145 } 00:12:32.145 ] 00:12:32.145 }' 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.145 12:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.525 "name": "raid_bdev1", 00:12:33.525 "uuid": "51af0778-cdb1-4451-98af-ce4c1703b547", 00:12:33.525 "strip_size_kb": 0, 00:12:33.525 "state": "online", 00:12:33.525 "raid_level": "raid1", 00:12:33.525 "superblock": false, 00:12:33.525 "num_base_bdevs": 2, 00:12:33.525 "num_base_bdevs_discovered": 2, 00:12:33.525 "num_base_bdevs_operational": 2, 00:12:33.525 "process": { 00:12:33.525 "type": "rebuild", 00:12:33.525 "target": "spare", 
00:12:33.525 "progress": { 00:12:33.525 "blocks": 47104, 00:12:33.525 "percent": 71 00:12:33.525 } 00:12:33.525 }, 00:12:33.525 "base_bdevs_list": [ 00:12:33.525 { 00:12:33.525 "name": "spare", 00:12:33.525 "uuid": "1c0bf41f-228d-5618-8e55-5086ec305412", 00:12:33.525 "is_configured": true, 00:12:33.525 "data_offset": 0, 00:12:33.525 "data_size": 65536 00:12:33.525 }, 00:12:33.525 { 00:12:33.525 "name": "BaseBdev2", 00:12:33.525 "uuid": "9d2808e6-7676-5e3d-a6aa-061c11dab65f", 00:12:33.525 "is_configured": true, 00:12:33.525 "data_offset": 0, 00:12:33.525 "data_size": 65536 00:12:33.525 } 00:12:33.525 ] 00:12:33.525 }' 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.525 12:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:34.094 [2024-11-19 12:04:37.356315] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:34.094 [2024-11-19 12:04:37.356399] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:34.094 [2024-11-19 12:04:37.356448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.353 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:34.353 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.353 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.353 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:12:34.353 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.353 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.353 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.353 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.353 12:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.353 12:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.353 12:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.353 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.353 "name": "raid_bdev1", 00:12:34.353 "uuid": "51af0778-cdb1-4451-98af-ce4c1703b547", 00:12:34.353 "strip_size_kb": 0, 00:12:34.353 "state": "online", 00:12:34.353 "raid_level": "raid1", 00:12:34.353 "superblock": false, 00:12:34.353 "num_base_bdevs": 2, 00:12:34.353 "num_base_bdevs_discovered": 2, 00:12:34.353 "num_base_bdevs_operational": 2, 00:12:34.353 "base_bdevs_list": [ 00:12:34.353 { 00:12:34.353 "name": "spare", 00:12:34.353 "uuid": "1c0bf41f-228d-5618-8e55-5086ec305412", 00:12:34.354 "is_configured": true, 00:12:34.354 "data_offset": 0, 00:12:34.354 "data_size": 65536 00:12:34.354 }, 00:12:34.354 { 00:12:34.354 "name": "BaseBdev2", 00:12:34.354 "uuid": "9d2808e6-7676-5e3d-a6aa-061c11dab65f", 00:12:34.354 "is_configured": true, 00:12:34.354 "data_offset": 0, 00:12:34.354 "data_size": 65536 00:12:34.354 } 00:12:34.354 ] 00:12:34.354 }' 00:12:34.354 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.354 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:34.354 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.613 "name": "raid_bdev1", 00:12:34.613 "uuid": "51af0778-cdb1-4451-98af-ce4c1703b547", 00:12:34.613 "strip_size_kb": 0, 00:12:34.613 "state": "online", 00:12:34.613 "raid_level": "raid1", 00:12:34.613 "superblock": false, 00:12:34.613 "num_base_bdevs": 2, 00:12:34.613 "num_base_bdevs_discovered": 2, 00:12:34.613 "num_base_bdevs_operational": 2, 00:12:34.613 "base_bdevs_list": [ 00:12:34.613 { 00:12:34.613 "name": "spare", 00:12:34.613 "uuid": "1c0bf41f-228d-5618-8e55-5086ec305412", 00:12:34.613 "is_configured": true, 00:12:34.613 "data_offset": 0, 00:12:34.613 "data_size": 65536 
00:12:34.613 }, 00:12:34.613 { 00:12:34.613 "name": "BaseBdev2", 00:12:34.613 "uuid": "9d2808e6-7676-5e3d-a6aa-061c11dab65f", 00:12:34.613 "is_configured": true, 00:12:34.613 "data_offset": 0, 00:12:34.613 "data_size": 65536 00:12:34.613 } 00:12:34.613 ] 00:12:34.613 }' 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.613 "name": "raid_bdev1", 00:12:34.613 "uuid": "51af0778-cdb1-4451-98af-ce4c1703b547", 00:12:34.613 "strip_size_kb": 0, 00:12:34.613 "state": "online", 00:12:34.613 "raid_level": "raid1", 00:12:34.613 "superblock": false, 00:12:34.613 "num_base_bdevs": 2, 00:12:34.613 "num_base_bdevs_discovered": 2, 00:12:34.613 "num_base_bdevs_operational": 2, 00:12:34.613 "base_bdevs_list": [ 00:12:34.613 { 00:12:34.613 "name": "spare", 00:12:34.613 "uuid": "1c0bf41f-228d-5618-8e55-5086ec305412", 00:12:34.613 "is_configured": true, 00:12:34.613 "data_offset": 0, 00:12:34.613 "data_size": 65536 00:12:34.613 }, 00:12:34.613 { 00:12:34.613 "name": "BaseBdev2", 00:12:34.613 "uuid": "9d2808e6-7676-5e3d-a6aa-061c11dab65f", 00:12:34.613 "is_configured": true, 00:12:34.613 "data_offset": 0, 00:12:34.613 "data_size": 65536 00:12:34.613 } 00:12:34.613 ] 00:12:34.613 }' 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.613 12:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.200 [2024-11-19 12:04:38.303371] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:35.200 [2024-11-19 12:04:38.303483] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:12:35.200 [2024-11-19 12:04:38.303602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.200 [2024-11-19 12:04:38.303714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.200 [2024-11-19 12:04:38.303770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.200 12:04:38 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:35.200 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:35.200 /dev/nbd0 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.460 1+0 records in 00:12:35.460 1+0 records out 00:12:35.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394818 s, 10.4 MB/s 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:35.460 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:35.460 /dev/nbd1 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.720 1+0 records in 00:12:35.720 1+0 records out 00:12:35.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325508 s, 12.6 MB/s 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:35.720 12:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:35.720 12:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:35.720 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:35.720 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.720 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:35.720 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:35.720 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.720 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:35.979 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:12:35.979 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:35.979 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:35.979 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.979 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.979 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:35.979 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:35.979 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.979 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.980 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75310 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@954 -- # '[' -z 75310 ']' 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75310 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75310 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75310' 00:12:36.240 killing process with pid 75310 00:12:36.240 Received shutdown signal, test time was about 60.000000 seconds 00:12:36.240 00:12:36.240 Latency(us) 00:12:36.240 [2024-11-19T12:04:39.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.240 [2024-11-19T12:04:39.617Z] =================================================================================================================== 00:12:36.240 [2024-11-19T12:04:39.617Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75310 00:12:36.240 [2024-11-19 12:04:39.528468] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:36.240 12:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75310 00:12:36.500 [2024-11-19 12:04:39.841169] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.880 12:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:37.880 00:12:37.880 real 0m15.785s 00:12:37.880 user 0m18.034s 00:12:37.880 sys 0m3.007s 00:12:37.880 12:04:40 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.880 ************************************ 00:12:37.880 END TEST raid_rebuild_test 00:12:37.880 ************************************ 00:12:37.880 12:04:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.880 12:04:41 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:37.880 12:04:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:37.880 12:04:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.880 12:04:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:37.880 ************************************ 00:12:37.880 START TEST raid_rebuild_test_sb 00:12:37.880 ************************************ 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:37.880 12:04:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75734 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75734 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75734 ']' 00:12:37.880 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.880 12:04:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.880 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:37.880 Zero copy mechanism will not be used. 00:12:37.880 [2024-11-19 12:04:41.169190] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:12:37.880 [2024-11-19 12:04:41.169320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75734 ] 00:12:38.140 [2024-11-19 12:04:41.347736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.140 [2024-11-19 12:04:41.482215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.399 [2024-11-19 12:04:41.709627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.399 [2024-11-19 12:04:41.709693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.659 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.659 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:38.659 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:38.659 
12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:38.659 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.659 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.919 BaseBdev1_malloc 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.919 [2024-11-19 12:04:42.075672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:38.919 [2024-11-19 12:04:42.075763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.919 [2024-11-19 12:04:42.075794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:38.919 [2024-11-19 12:04:42.075809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.919 [2024-11-19 12:04:42.078271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.919 [2024-11-19 12:04:42.078358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.919 BaseBdev1 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.919 BaseBdev2_malloc 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.919 [2024-11-19 12:04:42.136882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:38.919 [2024-11-19 12:04:42.136972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.919 [2024-11-19 12:04:42.136997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:38.919 [2024-11-19 12:04:42.137029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.919 [2024-11-19 12:04:42.139419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.919 [2024-11-19 12:04:42.139464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:38.919 BaseBdev2 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.919 spare_malloc 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.919 12:04:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.919 spare_delay 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.919 [2024-11-19 12:04:42.243846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:38.919 [2024-11-19 12:04:42.243929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.919 [2024-11-19 12:04:42.243953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:38.919 [2024-11-19 12:04:42.243966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.919 [2024-11-19 12:04:42.246337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.919 [2024-11-19 12:04:42.246440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:38.919 spare 00:12:38.919 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:38.920 [2024-11-19 12:04:42.255907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.920 [2024-11-19 12:04:42.257891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.920 [2024-11-19 12:04:42.258176] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:38.920 [2024-11-19 12:04:42.258201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:38.920 [2024-11-19 12:04:42.258443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:38.920 [2024-11-19 12:04:42.258630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:38.920 [2024-11-19 12:04:42.258640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:38.920 [2024-11-19 12:04:42.258799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.920 12:04:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.920 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.180 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.180 "name": "raid_bdev1", 00:12:39.180 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:39.180 "strip_size_kb": 0, 00:12:39.180 "state": "online", 00:12:39.180 "raid_level": "raid1", 00:12:39.180 "superblock": true, 00:12:39.180 "num_base_bdevs": 2, 00:12:39.180 "num_base_bdevs_discovered": 2, 00:12:39.180 "num_base_bdevs_operational": 2, 00:12:39.180 "base_bdevs_list": [ 00:12:39.180 { 00:12:39.180 "name": "BaseBdev1", 00:12:39.180 "uuid": "875b3de9-9091-5400-9afb-121f8349fb51", 00:12:39.180 "is_configured": true, 00:12:39.180 "data_offset": 2048, 00:12:39.180 "data_size": 63488 00:12:39.180 }, 00:12:39.180 { 00:12:39.180 "name": "BaseBdev2", 00:12:39.180 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:39.180 "is_configured": true, 00:12:39.180 "data_offset": 2048, 00:12:39.180 "data_size": 63488 00:12:39.180 } 00:12:39.180 ] 00:12:39.180 }' 00:12:39.180 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.180 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:12:39.440 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:39.440 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:39.440 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.440 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.440 [2024-11-19 12:04:42.695561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:39.440 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:39.441 
12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.441 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:39.700 [2024-11-19 12:04:42.963039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:39.700 /dev/nbd0 00:12:39.700 12:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:39.700 12:04:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.700 1+0 records in 00:12:39.700 1+0 records out 00:12:39.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241595 s, 17.0 MB/s 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:39.700 12:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:44.979 63488+0 records in 00:12:44.979 63488+0 records out 00:12:44.979 32505856 bytes (33 MB, 31 MiB) copied, 4.51927 s, 7.2 MB/s 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:44.979 [2024-11-19 12:04:47.766094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.979 [2024-11-19 12:04:47.780907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.979 12:04:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.979 "name": "raid_bdev1", 00:12:44.979 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:44.979 "strip_size_kb": 0, 00:12:44.979 "state": "online", 00:12:44.979 "raid_level": "raid1", 00:12:44.979 "superblock": true, 00:12:44.979 "num_base_bdevs": 2, 
00:12:44.979 "num_base_bdevs_discovered": 1, 00:12:44.979 "num_base_bdevs_operational": 1, 00:12:44.979 "base_bdevs_list": [ 00:12:44.979 { 00:12:44.979 "name": null, 00:12:44.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.979 "is_configured": false, 00:12:44.979 "data_offset": 0, 00:12:44.979 "data_size": 63488 00:12:44.979 }, 00:12:44.979 { 00:12:44.979 "name": "BaseBdev2", 00:12:44.979 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:44.979 "is_configured": true, 00:12:44.979 "data_offset": 2048, 00:12:44.979 "data_size": 63488 00:12:44.979 } 00:12:44.979 ] 00:12:44.979 }' 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.979 12:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.979 12:04:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:44.979 12:04:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.979 12:04:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.979 [2024-11-19 12:04:48.208165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.979 [2024-11-19 12:04:48.225520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:44.979 12:04:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.979 [2024-11-19 12:04:48.227431] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.979 12:04:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:45.919 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.919 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.919 12:04:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.919 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.919 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.919 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.919 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.919 12:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.919 12:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.919 12:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.919 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.919 "name": "raid_bdev1", 00:12:45.919 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:45.919 "strip_size_kb": 0, 00:12:45.919 "state": "online", 00:12:45.919 "raid_level": "raid1", 00:12:45.919 "superblock": true, 00:12:45.919 "num_base_bdevs": 2, 00:12:45.919 "num_base_bdevs_discovered": 2, 00:12:45.919 "num_base_bdevs_operational": 2, 00:12:45.919 "process": { 00:12:45.919 "type": "rebuild", 00:12:45.919 "target": "spare", 00:12:45.919 "progress": { 00:12:45.919 "blocks": 20480, 00:12:45.919 "percent": 32 00:12:45.919 } 00:12:45.919 }, 00:12:45.919 "base_bdevs_list": [ 00:12:45.919 { 00:12:45.919 "name": "spare", 00:12:45.919 "uuid": "14b9e310-efa6-50e9-93ec-0a86c5693662", 00:12:45.919 "is_configured": true, 00:12:45.919 "data_offset": 2048, 00:12:45.919 "data_size": 63488 00:12:45.919 }, 00:12:45.919 { 00:12:45.919 "name": "BaseBdev2", 00:12:45.919 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:45.919 "is_configured": true, 00:12:45.919 "data_offset": 2048, 00:12:45.919 "data_size": 63488 00:12:45.919 } 
00:12:45.919 ] 00:12:45.919 }' 00:12:45.919 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.179 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.179 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.179 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.179 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:46.179 12:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.179 12:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.179 [2024-11-19 12:04:49.343223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.179 [2024-11-19 12:04:49.432735] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:46.179 [2024-11-19 12:04:49.432794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.179 [2024-11-19 12:04:49.432808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.179 [2024-11-19 12:04:49.432817] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:46.179 12:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.180 "name": "raid_bdev1", 00:12:46.180 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:46.180 "strip_size_kb": 0, 00:12:46.180 "state": "online", 00:12:46.180 "raid_level": "raid1", 00:12:46.180 "superblock": true, 00:12:46.180 "num_base_bdevs": 2, 00:12:46.180 "num_base_bdevs_discovered": 1, 00:12:46.180 "num_base_bdevs_operational": 1, 00:12:46.180 "base_bdevs_list": [ 00:12:46.180 { 00:12:46.180 "name": null, 00:12:46.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.180 "is_configured": false, 00:12:46.180 "data_offset": 0, 00:12:46.180 "data_size": 63488 00:12:46.180 }, 00:12:46.180 { 00:12:46.180 "name": "BaseBdev2", 00:12:46.180 "uuid": 
"81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:46.180 "is_configured": true, 00:12:46.180 "data_offset": 2048, 00:12:46.180 "data_size": 63488 00:12:46.180 } 00:12:46.180 ] 00:12:46.180 }' 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.180 12:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.750 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:46.750 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.750 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:46.750 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:46.750 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.750 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.750 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.750 12:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.750 12:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.750 12:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.750 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.750 "name": "raid_bdev1", 00:12:46.750 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:46.750 "strip_size_kb": 0, 00:12:46.750 "state": "online", 00:12:46.750 "raid_level": "raid1", 00:12:46.750 "superblock": true, 00:12:46.750 "num_base_bdevs": 2, 00:12:46.750 "num_base_bdevs_discovered": 1, 00:12:46.750 "num_base_bdevs_operational": 1, 00:12:46.750 "base_bdevs_list": [ 00:12:46.750 { 
00:12:46.750 "name": null, 00:12:46.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.750 "is_configured": false, 00:12:46.750 "data_offset": 0, 00:12:46.750 "data_size": 63488 00:12:46.750 }, 00:12:46.750 { 00:12:46.750 "name": "BaseBdev2", 00:12:46.750 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:46.750 "is_configured": true, 00:12:46.750 "data_offset": 2048, 00:12:46.750 "data_size": 63488 00:12:46.750 } 00:12:46.750 ] 00:12:46.750 }' 00:12:46.750 12:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.750 12:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:46.750 12:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.750 12:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:46.750 12:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:46.750 12:04:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.750 12:04:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.750 [2024-11-19 12:04:50.066541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.750 [2024-11-19 12:04:50.082912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:46.750 12:04:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.750 12:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:46.750 [2024-11-19 12:04:50.084858] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:47.814 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.814 12:04:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.814 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.814 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.814 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.814 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.814 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.814 12:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.814 12:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.814 12:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.814 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.814 "name": "raid_bdev1", 00:12:47.814 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:47.814 "strip_size_kb": 0, 00:12:47.814 "state": "online", 00:12:47.814 "raid_level": "raid1", 00:12:47.814 "superblock": true, 00:12:47.814 "num_base_bdevs": 2, 00:12:47.814 "num_base_bdevs_discovered": 2, 00:12:47.814 "num_base_bdevs_operational": 2, 00:12:47.814 "process": { 00:12:47.814 "type": "rebuild", 00:12:47.814 "target": "spare", 00:12:47.814 "progress": { 00:12:47.814 "blocks": 20480, 00:12:47.814 "percent": 32 00:12:47.814 } 00:12:47.814 }, 00:12:47.814 "base_bdevs_list": [ 00:12:47.814 { 00:12:47.814 "name": "spare", 00:12:47.814 "uuid": "14b9e310-efa6-50e9-93ec-0a86c5693662", 00:12:47.814 "is_configured": true, 00:12:47.814 "data_offset": 2048, 00:12:47.814 "data_size": 63488 00:12:47.814 }, 00:12:47.814 { 00:12:47.814 "name": "BaseBdev2", 00:12:47.814 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:47.814 
"is_configured": true, 00:12:47.814 "data_offset": 2048, 00:12:47.814 "data_size": 63488 00:12:47.814 } 00:12:47.814 ] 00:12:47.814 }' 00:12:47.814 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:48.074 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=384 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.074 12:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.075 12:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.075 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.075 "name": "raid_bdev1", 00:12:48.075 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:48.075 "strip_size_kb": 0, 00:12:48.075 "state": "online", 00:12:48.075 "raid_level": "raid1", 00:12:48.075 "superblock": true, 00:12:48.075 "num_base_bdevs": 2, 00:12:48.075 "num_base_bdevs_discovered": 2, 00:12:48.075 "num_base_bdevs_operational": 2, 00:12:48.075 "process": { 00:12:48.075 "type": "rebuild", 00:12:48.075 "target": "spare", 00:12:48.075 "progress": { 00:12:48.075 "blocks": 22528, 00:12:48.075 "percent": 35 00:12:48.075 } 00:12:48.075 }, 00:12:48.075 "base_bdevs_list": [ 00:12:48.075 { 00:12:48.075 "name": "spare", 00:12:48.075 "uuid": "14b9e310-efa6-50e9-93ec-0a86c5693662", 00:12:48.075 "is_configured": true, 00:12:48.075 "data_offset": 2048, 00:12:48.075 "data_size": 63488 00:12:48.075 }, 00:12:48.075 { 00:12:48.075 "name": "BaseBdev2", 00:12:48.075 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:48.075 "is_configured": true, 00:12:48.075 "data_offset": 2048, 00:12:48.075 "data_size": 63488 00:12:48.075 } 00:12:48.075 ] 00:12:48.075 }' 00:12:48.075 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.075 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.075 12:04:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.075 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.075 12:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.455 "name": "raid_bdev1", 00:12:49.455 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:49.455 "strip_size_kb": 0, 00:12:49.455 "state": "online", 00:12:49.455 "raid_level": "raid1", 00:12:49.455 "superblock": true, 00:12:49.455 "num_base_bdevs": 2, 00:12:49.455 "num_base_bdevs_discovered": 2, 00:12:49.455 "num_base_bdevs_operational": 2, 00:12:49.455 "process": { 
00:12:49.455 "type": "rebuild", 00:12:49.455 "target": "spare", 00:12:49.455 "progress": { 00:12:49.455 "blocks": 47104, 00:12:49.455 "percent": 74 00:12:49.455 } 00:12:49.455 }, 00:12:49.455 "base_bdevs_list": [ 00:12:49.455 { 00:12:49.455 "name": "spare", 00:12:49.455 "uuid": "14b9e310-efa6-50e9-93ec-0a86c5693662", 00:12:49.455 "is_configured": true, 00:12:49.455 "data_offset": 2048, 00:12:49.455 "data_size": 63488 00:12:49.455 }, 00:12:49.455 { 00:12:49.455 "name": "BaseBdev2", 00:12:49.455 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:49.455 "is_configured": true, 00:12:49.455 "data_offset": 2048, 00:12:49.455 "data_size": 63488 00:12:49.455 } 00:12:49.455 ] 00:12:49.455 }' 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.455 12:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:50.025 [2024-11-19 12:04:53.198169] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:50.025 [2024-11-19 12:04:53.198235] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:50.025 [2024-11-19 12:04:53.198335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.283 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.283 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.283 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.283 
12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.283 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.283 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.283 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.283 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.283 12:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.284 12:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.284 12:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.284 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.284 "name": "raid_bdev1", 00:12:50.284 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:50.284 "strip_size_kb": 0, 00:12:50.284 "state": "online", 00:12:50.284 "raid_level": "raid1", 00:12:50.284 "superblock": true, 00:12:50.284 "num_base_bdevs": 2, 00:12:50.284 "num_base_bdevs_discovered": 2, 00:12:50.284 "num_base_bdevs_operational": 2, 00:12:50.284 "base_bdevs_list": [ 00:12:50.284 { 00:12:50.284 "name": "spare", 00:12:50.284 "uuid": "14b9e310-efa6-50e9-93ec-0a86c5693662", 00:12:50.284 "is_configured": true, 00:12:50.284 "data_offset": 2048, 00:12:50.284 "data_size": 63488 00:12:50.284 }, 00:12:50.284 { 00:12:50.284 "name": "BaseBdev2", 00:12:50.284 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:50.284 "is_configured": true, 00:12:50.284 "data_offset": 2048, 00:12:50.284 "data_size": 63488 00:12:50.284 } 00:12:50.284 ] 00:12:50.284 }' 00:12:50.284 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.284 12:04:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:50.284 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.543 "name": "raid_bdev1", 00:12:50.543 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:50.543 "strip_size_kb": 0, 00:12:50.543 "state": "online", 00:12:50.543 "raid_level": "raid1", 00:12:50.543 "superblock": true, 00:12:50.543 "num_base_bdevs": 2, 00:12:50.543 "num_base_bdevs_discovered": 2, 00:12:50.543 "num_base_bdevs_operational": 2, 00:12:50.543 "base_bdevs_list": [ 00:12:50.543 { 00:12:50.543 
"name": "spare", 00:12:50.543 "uuid": "14b9e310-efa6-50e9-93ec-0a86c5693662", 00:12:50.543 "is_configured": true, 00:12:50.543 "data_offset": 2048, 00:12:50.543 "data_size": 63488 00:12:50.543 }, 00:12:50.543 { 00:12:50.543 "name": "BaseBdev2", 00:12:50.543 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:50.543 "is_configured": true, 00:12:50.543 "data_offset": 2048, 00:12:50.543 "data_size": 63488 00:12:50.543 } 00:12:50.543 ] 00:12:50.543 }' 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.543 "name": "raid_bdev1", 00:12:50.543 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:50.543 "strip_size_kb": 0, 00:12:50.543 "state": "online", 00:12:50.543 "raid_level": "raid1", 00:12:50.543 "superblock": true, 00:12:50.543 "num_base_bdevs": 2, 00:12:50.543 "num_base_bdevs_discovered": 2, 00:12:50.543 "num_base_bdevs_operational": 2, 00:12:50.543 "base_bdevs_list": [ 00:12:50.543 { 00:12:50.543 "name": "spare", 00:12:50.543 "uuid": "14b9e310-efa6-50e9-93ec-0a86c5693662", 00:12:50.543 "is_configured": true, 00:12:50.543 "data_offset": 2048, 00:12:50.543 "data_size": 63488 00:12:50.543 }, 00:12:50.543 { 00:12:50.543 "name": "BaseBdev2", 00:12:50.543 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:50.543 "is_configured": true, 00:12:50.543 "data_offset": 2048, 00:12:50.543 "data_size": 63488 00:12:50.543 } 00:12:50.543 ] 00:12:50.543 }' 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.543 12:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.112 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:51.113 [2024-11-19 12:04:54.324554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:51.113 [2024-11-19 12:04:54.324657] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.113 [2024-11-19 12:04:54.324762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.113 [2024-11-19 12:04:54.324849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.113 [2024-11-19 12:04:54.324895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:51.113 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:51.372 /dev/nbd0 00:12:51.372 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:51.372 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:51.372 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:51.372 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:51.372 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.372 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.372 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:51.372 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:51.372 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.372 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.372 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.372 1+0 records in 00:12:51.373 1+0 records out 00:12:51.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564436 s, 7.3 MB/s 00:12:51.373 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.373 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:51.373 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.373 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.373 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:51.373 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.373 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:51.373 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:51.632 /dev/nbd1 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:51.632 12:04:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.632 1+0 records in 00:12:51.632 1+0 records out 00:12:51.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420482 s, 9.7 MB/s 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:51.632 12:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:51.891 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:51.891 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.891 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:51.891 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.891 
12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:51.891 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.891 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:52.151 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.411 [2024-11-19 12:04:55.545408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:52.411 [2024-11-19 12:04:55.545568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.411 [2024-11-19 12:04:55.545622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:52.411 [2024-11-19 12:04:55.545664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.411 [2024-11-19 12:04:55.548421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.411 [2024-11-19 12:04:55.548518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:52.411 [2024-11-19 12:04:55.548675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:52.411 [2024-11-19 
12:04:55.548778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.411 [2024-11-19 12:04:55.549032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.411 spare 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.411 [2024-11-19 12:04:55.649018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:52.411 [2024-11-19 12:04:55.649131] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:52.411 [2024-11-19 12:04:55.649530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:52.411 [2024-11-19 12:04:55.649814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:52.411 [2024-11-19 12:04:55.649872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:52.411 [2024-11-19 12:04:55.650185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.411 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.411 "name": "raid_bdev1", 00:12:52.411 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:52.411 "strip_size_kb": 0, 00:12:52.411 "state": "online", 00:12:52.411 "raid_level": "raid1", 00:12:52.411 "superblock": true, 00:12:52.411 "num_base_bdevs": 2, 00:12:52.411 "num_base_bdevs_discovered": 2, 00:12:52.411 "num_base_bdevs_operational": 2, 00:12:52.411 "base_bdevs_list": [ 00:12:52.411 { 00:12:52.412 "name": "spare", 00:12:52.412 "uuid": "14b9e310-efa6-50e9-93ec-0a86c5693662", 00:12:52.412 "is_configured": true, 00:12:52.412 "data_offset": 2048, 00:12:52.412 "data_size": 63488 00:12:52.412 }, 00:12:52.412 { 00:12:52.412 "name": "BaseBdev2", 00:12:52.412 "uuid": 
"81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:52.412 "is_configured": true, 00:12:52.412 "data_offset": 2048, 00:12:52.412 "data_size": 63488 00:12:52.412 } 00:12:52.412 ] 00:12:52.412 }' 00:12:52.412 12:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.412 12:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.981 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.981 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.981 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.981 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.981 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.981 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.981 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.981 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.981 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.981 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.981 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.981 "name": "raid_bdev1", 00:12:52.981 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:52.981 "strip_size_kb": 0, 00:12:52.981 "state": "online", 00:12:52.981 "raid_level": "raid1", 00:12:52.981 "superblock": true, 00:12:52.981 "num_base_bdevs": 2, 00:12:52.981 "num_base_bdevs_discovered": 2, 00:12:52.981 "num_base_bdevs_operational": 2, 00:12:52.981 "base_bdevs_list": [ 00:12:52.981 { 
00:12:52.981 "name": "spare", 00:12:52.981 "uuid": "14b9e310-efa6-50e9-93ec-0a86c5693662", 00:12:52.981 "is_configured": true, 00:12:52.981 "data_offset": 2048, 00:12:52.981 "data_size": 63488 00:12:52.981 }, 00:12:52.981 { 00:12:52.981 "name": "BaseBdev2", 00:12:52.981 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:52.981 "is_configured": true, 00:12:52.981 "data_offset": 2048, 00:12:52.981 "data_size": 63488 00:12:52.982 } 00:12:52.982 ] 00:12:52.982 }' 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.982 [2024-11-19 12:04:56.269213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.982 "name": "raid_bdev1", 00:12:52.982 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:52.982 "strip_size_kb": 0, 00:12:52.982 
"state": "online", 00:12:52.982 "raid_level": "raid1", 00:12:52.982 "superblock": true, 00:12:52.982 "num_base_bdevs": 2, 00:12:52.982 "num_base_bdevs_discovered": 1, 00:12:52.982 "num_base_bdevs_operational": 1, 00:12:52.982 "base_bdevs_list": [ 00:12:52.982 { 00:12:52.982 "name": null, 00:12:52.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.982 "is_configured": false, 00:12:52.982 "data_offset": 0, 00:12:52.982 "data_size": 63488 00:12:52.982 }, 00:12:52.982 { 00:12:52.982 "name": "BaseBdev2", 00:12:52.982 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:52.982 "is_configured": true, 00:12:52.982 "data_offset": 2048, 00:12:52.982 "data_size": 63488 00:12:52.982 } 00:12:52.982 ] 00:12:52.982 }' 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.982 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.551 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:53.551 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.551 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.551 [2024-11-19 12:04:56.733161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.551 [2024-11-19 12:04:56.733498] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:53.551 [2024-11-19 12:04:56.733582] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:53.551 [2024-11-19 12:04:56.733672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.551 [2024-11-19 12:04:56.751785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:53.551 12:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.551 12:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:53.551 [2024-11-19 12:04:56.754089] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:54.491 12:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.491 12:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.491 12:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.491 12:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.491 12:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.491 12:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.491 12:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.491 12:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.491 12:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.491 12:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.491 12:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.491 "name": "raid_bdev1", 00:12:54.491 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:54.491 "strip_size_kb": 0, 00:12:54.491 "state": "online", 00:12:54.491 "raid_level": "raid1", 
00:12:54.491 "superblock": true, 00:12:54.491 "num_base_bdevs": 2, 00:12:54.491 "num_base_bdevs_discovered": 2, 00:12:54.491 "num_base_bdevs_operational": 2, 00:12:54.491 "process": { 00:12:54.491 "type": "rebuild", 00:12:54.491 "target": "spare", 00:12:54.491 "progress": { 00:12:54.491 "blocks": 20480, 00:12:54.491 "percent": 32 00:12:54.491 } 00:12:54.491 }, 00:12:54.491 "base_bdevs_list": [ 00:12:54.491 { 00:12:54.491 "name": "spare", 00:12:54.491 "uuid": "14b9e310-efa6-50e9-93ec-0a86c5693662", 00:12:54.491 "is_configured": true, 00:12:54.491 "data_offset": 2048, 00:12:54.491 "data_size": 63488 00:12:54.491 }, 00:12:54.491 { 00:12:54.491 "name": "BaseBdev2", 00:12:54.491 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:54.491 "is_configured": true, 00:12:54.491 "data_offset": 2048, 00:12:54.491 "data_size": 63488 00:12:54.491 } 00:12:54.491 ] 00:12:54.491 }' 00:12:54.491 12:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.491 12:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.491 12:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.751 12:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.751 12:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:54.751 12:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.751 12:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.751 [2024-11-19 12:04:57.906477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.751 [2024-11-19 12:04:57.964103] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:54.751 [2024-11-19 12:04:57.964248] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:54.751 [2024-11-19 12:04:57.964296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.751 [2024-11-19 12:04:57.964328] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.751 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.751 "name": "raid_bdev1", 00:12:54.751 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:54.752 "strip_size_kb": 0, 00:12:54.752 "state": "online", 00:12:54.752 "raid_level": "raid1", 00:12:54.752 "superblock": true, 00:12:54.752 "num_base_bdevs": 2, 00:12:54.752 "num_base_bdevs_discovered": 1, 00:12:54.752 "num_base_bdevs_operational": 1, 00:12:54.752 "base_bdevs_list": [ 00:12:54.752 { 00:12:54.752 "name": null, 00:12:54.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.752 "is_configured": false, 00:12:54.752 "data_offset": 0, 00:12:54.752 "data_size": 63488 00:12:54.752 }, 00:12:54.752 { 00:12:54.752 "name": "BaseBdev2", 00:12:54.752 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:54.752 "is_configured": true, 00:12:54.752 "data_offset": 2048, 00:12:54.752 "data_size": 63488 00:12:54.752 } 00:12:54.752 ] 00:12:54.752 }' 00:12:54.752 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.752 12:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.321 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:55.321 12:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.321 12:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.321 [2024-11-19 12:04:58.469249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:55.321 [2024-11-19 12:04:58.469462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.321 [2024-11-19 12:04:58.469519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:55.321 [2024-11-19 12:04:58.469575] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.321 [2024-11-19 12:04:58.470248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.321 [2024-11-19 12:04:58.470331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:55.321 [2024-11-19 12:04:58.470500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:55.321 [2024-11-19 12:04:58.470557] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:55.321 [2024-11-19 12:04:58.470612] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:55.321 [2024-11-19 12:04:58.470701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:55.321 [2024-11-19 12:04:58.489281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:55.321 spare 00:12:55.321 12:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.321 12:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:55.321 [2024-11-19 12:04:58.491603] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.260 "name": "raid_bdev1", 00:12:56.260 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:56.260 "strip_size_kb": 0, 00:12:56.260 "state": "online", 00:12:56.260 "raid_level": "raid1", 00:12:56.260 "superblock": true, 00:12:56.260 "num_base_bdevs": 2, 00:12:56.260 "num_base_bdevs_discovered": 2, 00:12:56.260 "num_base_bdevs_operational": 2, 00:12:56.260 "process": { 00:12:56.260 "type": "rebuild", 00:12:56.260 "target": "spare", 00:12:56.260 "progress": { 00:12:56.260 "blocks": 20480, 00:12:56.260 "percent": 32 00:12:56.260 } 00:12:56.260 }, 00:12:56.260 "base_bdevs_list": [ 00:12:56.260 { 00:12:56.260 "name": "spare", 00:12:56.260 "uuid": "14b9e310-efa6-50e9-93ec-0a86c5693662", 00:12:56.260 "is_configured": true, 00:12:56.260 "data_offset": 2048, 00:12:56.260 "data_size": 63488 00:12:56.260 }, 00:12:56.260 { 00:12:56.260 "name": "BaseBdev2", 00:12:56.260 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:56.260 "is_configured": true, 00:12:56.260 "data_offset": 2048, 00:12:56.260 "data_size": 63488 00:12:56.260 } 00:12:56.260 ] 00:12:56.260 }' 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.260 
12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.260 12:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.260 [2024-11-19 12:04:59.627590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.519 [2024-11-19 12:04:59.701123] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:56.519 [2024-11-19 12:04:59.701279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.519 [2024-11-19 12:04:59.701328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.519 [2024-11-19 12:04:59.701354] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.519 "name": "raid_bdev1", 00:12:56.519 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:56.519 "strip_size_kb": 0, 00:12:56.519 "state": "online", 00:12:56.519 "raid_level": "raid1", 00:12:56.519 "superblock": true, 00:12:56.519 "num_base_bdevs": 2, 00:12:56.519 "num_base_bdevs_discovered": 1, 00:12:56.519 "num_base_bdevs_operational": 1, 00:12:56.519 "base_bdevs_list": [ 00:12:56.519 { 00:12:56.519 "name": null, 00:12:56.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.519 "is_configured": false, 00:12:56.519 "data_offset": 0, 00:12:56.519 "data_size": 63488 00:12:56.519 }, 00:12:56.519 { 00:12:56.519 "name": "BaseBdev2", 00:12:56.519 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:56.519 "is_configured": true, 00:12:56.519 "data_offset": 2048, 00:12:56.519 "data_size": 63488 00:12:56.519 } 00:12:56.519 ] 00:12:56.519 }' 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.519 12:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.088 12:05:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.088 "name": "raid_bdev1", 00:12:57.088 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:57.088 "strip_size_kb": 0, 00:12:57.088 "state": "online", 00:12:57.088 "raid_level": "raid1", 00:12:57.088 "superblock": true, 00:12:57.088 "num_base_bdevs": 2, 00:12:57.088 "num_base_bdevs_discovered": 1, 00:12:57.088 "num_base_bdevs_operational": 1, 00:12:57.088 "base_bdevs_list": [ 00:12:57.088 { 00:12:57.088 "name": null, 00:12:57.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.088 "is_configured": false, 00:12:57.088 "data_offset": 0, 00:12:57.088 "data_size": 63488 00:12:57.088 }, 00:12:57.088 { 00:12:57.088 "name": "BaseBdev2", 00:12:57.088 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:57.088 "is_configured": true, 00:12:57.088 "data_offset": 2048, 00:12:57.088 "data_size": 
63488 00:12:57.088 } 00:12:57.088 ] 00:12:57.088 }' 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.088 [2024-11-19 12:05:00.345369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:57.088 [2024-11-19 12:05:00.345557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.088 [2024-11-19 12:05:00.345600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:57.088 [2024-11-19 12:05:00.345628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.088 [2024-11-19 12:05:00.346263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.088 [2024-11-19 12:05:00.346296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:57.088 [2024-11-19 12:05:00.346419] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:57.088 [2024-11-19 12:05:00.346505] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:57.088 [2024-11-19 12:05:00.346524] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:57.088 [2024-11-19 12:05:00.346542] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:57.088 BaseBdev1 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.088 12:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.025 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.284 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.284 "name": "raid_bdev1", 00:12:58.284 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:58.284 "strip_size_kb": 0, 00:12:58.284 "state": "online", 00:12:58.284 "raid_level": "raid1", 00:12:58.284 "superblock": true, 00:12:58.284 "num_base_bdevs": 2, 00:12:58.284 "num_base_bdevs_discovered": 1, 00:12:58.284 "num_base_bdevs_operational": 1, 00:12:58.284 "base_bdevs_list": [ 00:12:58.284 { 00:12:58.284 "name": null, 00:12:58.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.284 "is_configured": false, 00:12:58.284 "data_offset": 0, 00:12:58.284 "data_size": 63488 00:12:58.284 }, 00:12:58.284 { 00:12:58.284 "name": "BaseBdev2", 00:12:58.284 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:58.284 "is_configured": true, 00:12:58.284 "data_offset": 2048, 00:12:58.284 "data_size": 63488 00:12:58.284 } 00:12:58.284 ] 00:12:58.284 }' 00:12:58.284 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.284 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.543 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:58.543 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.543 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:58.543 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:58.543 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.543 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.543 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.543 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.543 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.543 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.543 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.543 "name": "raid_bdev1", 00:12:58.543 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:58.543 "strip_size_kb": 0, 00:12:58.543 "state": "online", 00:12:58.543 "raid_level": "raid1", 00:12:58.543 "superblock": true, 00:12:58.543 "num_base_bdevs": 2, 00:12:58.543 "num_base_bdevs_discovered": 1, 00:12:58.543 "num_base_bdevs_operational": 1, 00:12:58.543 "base_bdevs_list": [ 00:12:58.543 { 00:12:58.543 "name": null, 00:12:58.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.543 "is_configured": false, 00:12:58.543 "data_offset": 0, 00:12:58.543 "data_size": 63488 00:12:58.543 }, 00:12:58.543 { 00:12:58.543 "name": "BaseBdev2", 00:12:58.544 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:58.544 "is_configured": true, 00:12:58.544 "data_offset": 2048, 00:12:58.544 "data_size": 63488 00:12:58.544 } 00:12:58.544 ] 00:12:58.544 }' 00:12:58.544 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.544 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:58.544 12:05:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.803 [2024-11-19 12:05:01.959295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.803 [2024-11-19 12:05:01.959638] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:58.803 [2024-11-19 12:05:01.959712] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:58.803 request: 00:12:58.803 { 00:12:58.803 "base_bdev": "BaseBdev1", 00:12:58.803 "raid_bdev": "raid_bdev1", 00:12:58.803 "method": 
"bdev_raid_add_base_bdev", 00:12:58.803 "req_id": 1 00:12:58.803 } 00:12:58.803 Got JSON-RPC error response 00:12:58.803 response: 00:12:58.803 { 00:12:58.803 "code": -22, 00:12:58.803 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:58.803 } 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.803 12:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:59.741 12:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.741 12:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.741 12:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.741 12:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.741 12:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.741 12:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.741 12:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.742 12:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.742 12:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.742 12:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.742 12:05:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.742 12:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.742 12:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.742 12:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.742 12:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.742 12:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.742 "name": "raid_bdev1", 00:12:59.742 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:12:59.742 "strip_size_kb": 0, 00:12:59.742 "state": "online", 00:12:59.742 "raid_level": "raid1", 00:12:59.742 "superblock": true, 00:12:59.742 "num_base_bdevs": 2, 00:12:59.742 "num_base_bdevs_discovered": 1, 00:12:59.742 "num_base_bdevs_operational": 1, 00:12:59.742 "base_bdevs_list": [ 00:12:59.742 { 00:12:59.742 "name": null, 00:12:59.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.742 "is_configured": false, 00:12:59.742 "data_offset": 0, 00:12:59.742 "data_size": 63488 00:12:59.742 }, 00:12:59.742 { 00:12:59.742 "name": "BaseBdev2", 00:12:59.742 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:12:59.742 "is_configured": true, 00:12:59.742 "data_offset": 2048, 00:12:59.742 "data_size": 63488 00:12:59.742 } 00:12:59.742 ] 00:12:59.742 }' 00:12:59.742 12:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.742 12:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.311 "name": "raid_bdev1", 00:13:00.311 "uuid": "ccde09dd-764e-408a-bf44-d6840b685a47", 00:13:00.311 "strip_size_kb": 0, 00:13:00.311 "state": "online", 00:13:00.311 "raid_level": "raid1", 00:13:00.311 "superblock": true, 00:13:00.311 "num_base_bdevs": 2, 00:13:00.311 "num_base_bdevs_discovered": 1, 00:13:00.311 "num_base_bdevs_operational": 1, 00:13:00.311 "base_bdevs_list": [ 00:13:00.311 { 00:13:00.311 "name": null, 00:13:00.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.311 "is_configured": false, 00:13:00.311 "data_offset": 0, 00:13:00.311 "data_size": 63488 00:13:00.311 }, 00:13:00.311 { 00:13:00.311 "name": "BaseBdev2", 00:13:00.311 "uuid": "81628c65-789c-5c88-b324-3a2b2c851ca9", 00:13:00.311 "is_configured": true, 00:13:00.311 "data_offset": 2048, 00:13:00.311 "data_size": 63488 00:13:00.311 } 00:13:00.311 ] 00:13:00.311 }' 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75734 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75734 ']' 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75734 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75734 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.311 killing process with pid 75734 00:13:00.311 Received shutdown signal, test time was about 60.000000 seconds 00:13:00.311 00:13:00.311 Latency(us) 00:13:00.311 [2024-11-19T12:05:03.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.311 [2024-11-19T12:05:03.688Z] =================================================================================================================== 00:13:00.311 [2024-11-19T12:05:03.688Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75734' 00:13:00.311 12:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75734 00:13:00.311 [2024-11-19 12:05:03.580819] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:00.311 12:05:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75734 00:13:00.311 [2024-11-19 12:05:03.581046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.311 [2024-11-19 12:05:03.581126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.311 [2024-11-19 12:05:03.581143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:00.571 [2024-11-19 12:05:03.930128] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:01.951 00:13:01.951 real 0m24.125s 00:13:01.951 user 0m28.647s 00:13:01.951 sys 0m3.982s 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.951 ************************************ 00:13:01.951 END TEST raid_rebuild_test_sb 00:13:01.951 ************************************ 00:13:01.951 12:05:05 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:01.951 12:05:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:01.951 12:05:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.951 12:05:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:01.951 ************************************ 00:13:01.951 START TEST raid_rebuild_test_io 00:13:01.951 ************************************ 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:01.951 
12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76470 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76470 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76470 ']' 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.951 12:05:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.211 [2024-11-19 12:05:05.364153] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:13:02.211 [2024-11-19 12:05:05.364370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:02.211 Zero copy mechanism will not be used. 
00:13:02.211 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76470 ] 00:13:02.211 [2024-11-19 12:05:05.515532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.471 [2024-11-19 12:05:05.646549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.730 [2024-11-19 12:05:05.863203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.730 [2024-11-19 12:05:05.863278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.992 BaseBdev1_malloc 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.992 [2024-11-19 12:05:06.259562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:02.992 [2024-11-19 12:05:06.259747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:02.992 [2024-11-19 12:05:06.259800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:02.992 [2024-11-19 12:05:06.259845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.992 [2024-11-19 12:05:06.262329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.992 [2024-11-19 12:05:06.262422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:02.992 BaseBdev1 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.992 BaseBdev2_malloc 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.992 [2024-11-19 12:05:06.318317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:02.992 [2024-11-19 12:05:06.318458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.992 [2024-11-19 12:05:06.318502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:02.992 [2024-11-19 12:05:06.318549] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.992 [2024-11-19 12:05:06.321034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.992 [2024-11-19 12:05:06.321121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:02.992 BaseBdev2 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.992 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.268 spare_malloc 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.268 spare_delay 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.268 [2024-11-19 12:05:06.426587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:03.268 [2024-11-19 12:05:06.426717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:03.268 [2024-11-19 12:05:06.426744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:03.268 [2024-11-19 12:05:06.426759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.268 [2024-11-19 12:05:06.429233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.268 [2024-11-19 12:05:06.429281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:03.268 spare 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.268 [2024-11-19 12:05:06.438633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.268 [2024-11-19 12:05:06.440770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.268 [2024-11-19 12:05:06.440923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:03.268 [2024-11-19 12:05:06.440963] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:03.268 [2024-11-19 12:05:06.441279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:03.268 [2024-11-19 12:05:06.441505] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:03.268 [2024-11-19 12:05:06.441556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:03.268 [2024-11-19 12:05:06.441772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.268 "name": "raid_bdev1", 00:13:03.268 "uuid": "6a059c6e-66cd-44c4-bfbe-d7069bf7d82a", 00:13:03.268 
"strip_size_kb": 0, 00:13:03.268 "state": "online", 00:13:03.268 "raid_level": "raid1", 00:13:03.268 "superblock": false, 00:13:03.268 "num_base_bdevs": 2, 00:13:03.268 "num_base_bdevs_discovered": 2, 00:13:03.268 "num_base_bdevs_operational": 2, 00:13:03.268 "base_bdevs_list": [ 00:13:03.268 { 00:13:03.268 "name": "BaseBdev1", 00:13:03.268 "uuid": "9c1e1221-23cc-5180-9e3d-782d7674f572", 00:13:03.268 "is_configured": true, 00:13:03.268 "data_offset": 0, 00:13:03.268 "data_size": 65536 00:13:03.268 }, 00:13:03.268 { 00:13:03.268 "name": "BaseBdev2", 00:13:03.268 "uuid": "c7f6ce30-729e-5609-94fe-9a1a7f04b8e1", 00:13:03.268 "is_configured": true, 00:13:03.268 "data_offset": 0, 00:13:03.268 "data_size": 65536 00:13:03.268 } 00:13:03.268 ] 00:13:03.268 }' 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.268 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.528 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:03.528 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.528 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.528 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:03.528 [2024-11-19 12:05:06.854597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.528 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.528 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:03.528 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.528 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.528 12:05:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:03.528 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.788 [2024-11-19 12:05:06.953981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.788 12:05:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.788 12:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.788 12:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.788 "name": "raid_bdev1", 00:13:03.788 "uuid": "6a059c6e-66cd-44c4-bfbe-d7069bf7d82a", 00:13:03.788 "strip_size_kb": 0, 00:13:03.788 "state": "online", 00:13:03.788 "raid_level": "raid1", 00:13:03.788 "superblock": false, 00:13:03.788 "num_base_bdevs": 2, 00:13:03.788 "num_base_bdevs_discovered": 1, 00:13:03.788 "num_base_bdevs_operational": 1, 00:13:03.788 "base_bdevs_list": [ 00:13:03.788 { 00:13:03.788 "name": null, 00:13:03.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.788 "is_configured": false, 00:13:03.788 "data_offset": 0, 00:13:03.788 "data_size": 65536 00:13:03.788 }, 00:13:03.788 { 00:13:03.788 "name": "BaseBdev2", 00:13:03.788 "uuid": "c7f6ce30-729e-5609-94fe-9a1a7f04b8e1", 00:13:03.788 "is_configured": true, 00:13:03.788 "data_offset": 0, 00:13:03.788 "data_size": 65536 00:13:03.788 } 00:13:03.788 ] 00:13:03.788 }' 00:13:03.788 12:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.788 12:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:03.788 [2024-11-19 12:05:07.058717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:03.788 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:03.788 Zero copy mechanism will not be used. 00:13:03.788 Running I/O for 60 seconds... 00:13:04.048 12:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:04.048 12:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.048 12:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.048 [2024-11-19 12:05:07.349696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.048 12:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.048 12:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:04.048 [2024-11-19 12:05:07.412175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:04.048 [2024-11-19 12:05:07.414541] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:04.306 [2024-11-19 12:05:07.536392] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:04.306 [2024-11-19 12:05:07.536915] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:04.564 [2024-11-19 12:05:07.779022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:05.082 136.00 IOPS, 408.00 MiB/s [2024-11-19T12:05:08.459Z] [2024-11-19 12:05:08.271965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:05.082 [2024-11-19 12:05:08.272529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:05.082 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.082 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.082 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.082 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.082 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.082 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.082 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.082 12:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.082 12:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.082 12:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.082 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.082 "name": "raid_bdev1", 00:13:05.082 "uuid": "6a059c6e-66cd-44c4-bfbe-d7069bf7d82a", 00:13:05.082 "strip_size_kb": 0, 00:13:05.082 "state": "online", 00:13:05.082 "raid_level": "raid1", 00:13:05.082 "superblock": false, 00:13:05.082 "num_base_bdevs": 2, 00:13:05.082 "num_base_bdevs_discovered": 2, 00:13:05.083 "num_base_bdevs_operational": 2, 00:13:05.083 "process": { 00:13:05.083 "type": "rebuild", 00:13:05.083 "target": "spare", 00:13:05.083 "progress": { 00:13:05.083 "blocks": 10240, 00:13:05.083 "percent": 15 00:13:05.083 } 00:13:05.083 }, 00:13:05.083 "base_bdevs_list": [ 00:13:05.083 { 00:13:05.083 "name": "spare", 00:13:05.083 "uuid": "aca08df6-de08-5530-abdf-c251c6928412", 00:13:05.083 
"is_configured": true, 00:13:05.083 "data_offset": 0, 00:13:05.083 "data_size": 65536 00:13:05.083 }, 00:13:05.083 { 00:13:05.083 "name": "BaseBdev2", 00:13:05.083 "uuid": "c7f6ce30-729e-5609-94fe-9a1a7f04b8e1", 00:13:05.083 "is_configured": true, 00:13:05.083 "data_offset": 0, 00:13:05.083 "data_size": 65536 00:13:05.083 } 00:13:05.083 ] 00:13:05.083 }' 00:13:05.083 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.343 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.343 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.343 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.343 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:05.343 12:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.343 12:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.343 [2024-11-19 12:05:08.518848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.602 [2024-11-19 12:05:08.732570] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:05.602 [2024-11-19 12:05:08.741062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.602 [2024-11-19 12:05:08.741149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.602 [2024-11-19 12:05:08.741165] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:05.602 [2024-11-19 12:05:08.793256] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.602 "name": "raid_bdev1", 00:13:05.602 "uuid": "6a059c6e-66cd-44c4-bfbe-d7069bf7d82a", 00:13:05.602 "strip_size_kb": 0, 00:13:05.602 "state": "online", 00:13:05.602 "raid_level": "raid1", 00:13:05.602 "superblock": false, 
00:13:05.602 "num_base_bdevs": 2, 00:13:05.602 "num_base_bdevs_discovered": 1, 00:13:05.602 "num_base_bdevs_operational": 1, 00:13:05.602 "base_bdevs_list": [ 00:13:05.602 { 00:13:05.602 "name": null, 00:13:05.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.602 "is_configured": false, 00:13:05.602 "data_offset": 0, 00:13:05.602 "data_size": 65536 00:13:05.602 }, 00:13:05.602 { 00:13:05.602 "name": "BaseBdev2", 00:13:05.602 "uuid": "c7f6ce30-729e-5609-94fe-9a1a7f04b8e1", 00:13:05.602 "is_configured": true, 00:13:05.602 "data_offset": 0, 00:13:05.602 "data_size": 65536 00:13:05.602 } 00:13:05.602 ] 00:13:05.602 }' 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.602 12:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.119 135.00 IOPS, 405.00 MiB/s [2024-11-19T12:05:09.496Z] 12:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.119 
12:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.119 "name": "raid_bdev1", 00:13:06.119 "uuid": "6a059c6e-66cd-44c4-bfbe-d7069bf7d82a", 00:13:06.119 "strip_size_kb": 0, 00:13:06.119 "state": "online", 00:13:06.119 "raid_level": "raid1", 00:13:06.119 "superblock": false, 00:13:06.119 "num_base_bdevs": 2, 00:13:06.119 "num_base_bdevs_discovered": 1, 00:13:06.119 "num_base_bdevs_operational": 1, 00:13:06.119 "base_bdevs_list": [ 00:13:06.119 { 00:13:06.119 "name": null, 00:13:06.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.119 "is_configured": false, 00:13:06.119 "data_offset": 0, 00:13:06.119 "data_size": 65536 00:13:06.119 }, 00:13:06.119 { 00:13:06.119 "name": "BaseBdev2", 00:13:06.119 "uuid": "c7f6ce30-729e-5609-94fe-9a1a7f04b8e1", 00:13:06.119 "is_configured": true, 00:13:06.119 "data_offset": 0, 00:13:06.119 "data_size": 65536 00:13:06.119 } 00:13:06.119 ] 00:13:06.119 }' 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.119 [2024-11-19 12:05:09.406931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.119 12:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.119 12:05:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:06.119 [2024-11-19 12:05:09.462401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:06.119 [2024-11-19 12:05:09.464791] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:06.378 [2024-11-19 12:05:09.579232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:06.378 [2024-11-19 12:05:09.580134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:06.637 [2024-11-19 12:05:09.810514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:06.637 [2024-11-19 12:05:09.810987] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:06.895 135.33 IOPS, 406.00 MiB/s [2024-11-19T12:05:10.272Z] [2024-11-19 12:05:10.157755] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:06.895 [2024-11-19 12:05:10.158670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:07.154 [2024-11-19 12:05:10.274807] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:07.154 [2024-11-19 12:05:10.275229] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:07.154 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.154 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.154 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:07.154 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.154 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.154 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.154 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.154 12:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.154 12:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.154 12:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.154 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.154 "name": "raid_bdev1", 00:13:07.154 "uuid": "6a059c6e-66cd-44c4-bfbe-d7069bf7d82a", 00:13:07.154 "strip_size_kb": 0, 00:13:07.154 "state": "online", 00:13:07.154 "raid_level": "raid1", 00:13:07.154 "superblock": false, 00:13:07.154 "num_base_bdevs": 2, 00:13:07.154 "num_base_bdevs_discovered": 2, 00:13:07.154 "num_base_bdevs_operational": 2, 00:13:07.154 "process": { 00:13:07.154 "type": "rebuild", 00:13:07.154 "target": "spare", 00:13:07.154 "progress": { 00:13:07.154 "blocks": 10240, 00:13:07.154 "percent": 15 00:13:07.154 } 00:13:07.154 }, 00:13:07.154 "base_bdevs_list": [ 00:13:07.154 { 00:13:07.154 "name": "spare", 00:13:07.154 "uuid": "aca08df6-de08-5530-abdf-c251c6928412", 00:13:07.154 "is_configured": true, 00:13:07.154 "data_offset": 0, 00:13:07.154 "data_size": 65536 00:13:07.154 }, 00:13:07.154 { 00:13:07.154 "name": "BaseBdev2", 00:13:07.154 "uuid": "c7f6ce30-729e-5609-94fe-9a1a7f04b8e1", 00:13:07.154 "is_configured": true, 00:13:07.154 "data_offset": 0, 00:13:07.154 "data_size": 65536 00:13:07.154 } 00:13:07.154 ] 00:13:07.154 }' 00:13:07.154 12:05:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=403 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.413 
12:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.413 [2024-11-19 12:05:10.608492] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:07.413 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.413 "name": "raid_bdev1", 00:13:07.413 "uuid": "6a059c6e-66cd-44c4-bfbe-d7069bf7d82a", 00:13:07.413 "strip_size_kb": 0, 00:13:07.413 "state": "online", 00:13:07.413 "raid_level": "raid1", 00:13:07.413 "superblock": false, 00:13:07.413 "num_base_bdevs": 2, 00:13:07.413 "num_base_bdevs_discovered": 2, 00:13:07.413 "num_base_bdevs_operational": 2, 00:13:07.413 "process": { 00:13:07.413 "type": "rebuild", 00:13:07.413 "target": "spare", 00:13:07.413 "progress": { 00:13:07.413 "blocks": 12288, 00:13:07.413 "percent": 18 00:13:07.413 } 00:13:07.413 }, 00:13:07.413 "base_bdevs_list": [ 00:13:07.413 { 00:13:07.413 "name": "spare", 00:13:07.413 "uuid": "aca08df6-de08-5530-abdf-c251c6928412", 00:13:07.414 "is_configured": true, 00:13:07.414 "data_offset": 0, 00:13:07.414 "data_size": 65536 00:13:07.414 }, 00:13:07.414 { 00:13:07.414 "name": "BaseBdev2", 00:13:07.414 "uuid": "c7f6ce30-729e-5609-94fe-9a1a7f04b8e1", 00:13:07.414 "is_configured": true, 00:13:07.414 "data_offset": 0, 00:13:07.414 "data_size": 65536 00:13:07.414 } 00:13:07.414 ] 00:13:07.414 }' 00:13:07.414 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.414 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.414 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.414 12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.414 
12:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:07.673 [2024-11-19 12:05:10.838745] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:07.933 123.00 IOPS, 369.00 MiB/s [2024-11-19T12:05:11.310Z] [2024-11-19 12:05:11.079744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:07.933 [2024-11-19 12:05:11.080410] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:08.192 [2024-11-19 12:05:11.537886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.450 "name": "raid_bdev1", 00:13:08.450 "uuid": "6a059c6e-66cd-44c4-bfbe-d7069bf7d82a", 00:13:08.450 "strip_size_kb": 0, 00:13:08.450 "state": "online", 00:13:08.450 "raid_level": "raid1", 00:13:08.450 "superblock": false, 00:13:08.450 "num_base_bdevs": 2, 00:13:08.450 "num_base_bdevs_discovered": 2, 00:13:08.450 "num_base_bdevs_operational": 2, 00:13:08.450 "process": { 00:13:08.450 "type": "rebuild", 00:13:08.450 "target": "spare", 00:13:08.450 "progress": { 00:13:08.450 "blocks": 28672, 00:13:08.450 "percent": 43 00:13:08.450 } 00:13:08.450 }, 00:13:08.450 "base_bdevs_list": [ 00:13:08.450 { 00:13:08.450 "name": "spare", 00:13:08.450 "uuid": "aca08df6-de08-5530-abdf-c251c6928412", 00:13:08.450 "is_configured": true, 00:13:08.450 "data_offset": 0, 00:13:08.450 "data_size": 65536 00:13:08.450 }, 00:13:08.450 { 00:13:08.450 "name": "BaseBdev2", 00:13:08.450 "uuid": "c7f6ce30-729e-5609-94fe-9a1a7f04b8e1", 00:13:08.450 "is_configured": true, 00:13:08.450 "data_offset": 0, 00:13:08.450 "data_size": 65536 00:13:08.450 } 00:13:08.450 ] 00:13:08.450 }' 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.450 12:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.708 [2024-11-19 12:05:11.859942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:08.708 [2024-11-19 12:05:11.860802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:08.708 12:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.708 12:05:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:09.643 112.00 IOPS, 336.00 MiB/s [2024-11-19T12:05:13.020Z] [2024-11-19 12:05:12.742146] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.643 "name": "raid_bdev1", 00:13:09.643 "uuid": "6a059c6e-66cd-44c4-bfbe-d7069bf7d82a", 00:13:09.643 "strip_size_kb": 0, 00:13:09.643 "state": "online", 00:13:09.643 "raid_level": "raid1", 00:13:09.643 "superblock": false, 00:13:09.643 "num_base_bdevs": 2, 00:13:09.643 "num_base_bdevs_discovered": 2, 00:13:09.643 "num_base_bdevs_operational": 2, 00:13:09.643 "process": { 
00:13:09.643 "type": "rebuild", 00:13:09.643 "target": "spare", 00:13:09.643 "progress": { 00:13:09.643 "blocks": 47104, 00:13:09.643 "percent": 71 00:13:09.643 } 00:13:09.643 }, 00:13:09.643 "base_bdevs_list": [ 00:13:09.643 { 00:13:09.643 "name": "spare", 00:13:09.643 "uuid": "aca08df6-de08-5530-abdf-c251c6928412", 00:13:09.643 "is_configured": true, 00:13:09.643 "data_offset": 0, 00:13:09.643 "data_size": 65536 00:13:09.643 }, 00:13:09.643 { 00:13:09.643 "name": "BaseBdev2", 00:13:09.643 "uuid": "c7f6ce30-729e-5609-94fe-9a1a7f04b8e1", 00:13:09.643 "is_configured": true, 00:13:09.643 "data_offset": 0, 00:13:09.643 "data_size": 65536 00:13:09.643 } 00:13:09.643 ] 00:13:09.643 }' 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.643 12:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.901 12:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.902 12:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:09.902 100.17 IOPS, 300.50 MiB/s [2024-11-19T12:05:13.279Z] [2024-11-19 12:05:13.075048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:10.160 [2024-11-19 12:05:13.405221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:10.771 [2024-11-19 12:05:13.939197] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:10.771 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:10.771 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.771 
12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.771 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.771 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.771 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.771 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.771 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.771 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.771 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.771 [2024-11-19 12:05:14.039046] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:10.771 [2024-11-19 12:05:14.042725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.771 91.00 IOPS, 273.00 MiB/s [2024-11-19T12:05:14.148Z] 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.771 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.771 "name": "raid_bdev1", 00:13:10.771 "uuid": "6a059c6e-66cd-44c4-bfbe-d7069bf7d82a", 00:13:10.771 "strip_size_kb": 0, 00:13:10.771 "state": "online", 00:13:10.771 "raid_level": "raid1", 00:13:10.771 "superblock": false, 00:13:10.771 "num_base_bdevs": 2, 00:13:10.771 "num_base_bdevs_discovered": 2, 00:13:10.771 "num_base_bdevs_operational": 2, 00:13:10.771 "base_bdevs_list": [ 00:13:10.771 { 00:13:10.771 "name": "spare", 00:13:10.771 "uuid": "aca08df6-de08-5530-abdf-c251c6928412", 00:13:10.771 "is_configured": true, 00:13:10.771 "data_offset": 0, 00:13:10.771 "data_size": 65536 00:13:10.771 }, 00:13:10.771 { 
00:13:10.771 "name": "BaseBdev2", 00:13:10.771 "uuid": "c7f6ce30-729e-5609-94fe-9a1a7f04b8e1", 00:13:10.771 "is_configured": true, 00:13:10.771 "data_offset": 0, 00:13:10.771 "data_size": 65536 00:13:10.771 } 00:13:10.771 ] 00:13:10.771 }' 00:13:10.771 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.771 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:11.030 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.030 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:11.030 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:11.030 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.030 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.030 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.030 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.030 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.030 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.030 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.030 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.030 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.031 
"name": "raid_bdev1", 00:13:11.031 "uuid": "6a059c6e-66cd-44c4-bfbe-d7069bf7d82a", 00:13:11.031 "strip_size_kb": 0, 00:13:11.031 "state": "online", 00:13:11.031 "raid_level": "raid1", 00:13:11.031 "superblock": false, 00:13:11.031 "num_base_bdevs": 2, 00:13:11.031 "num_base_bdevs_discovered": 2, 00:13:11.031 "num_base_bdevs_operational": 2, 00:13:11.031 "base_bdevs_list": [ 00:13:11.031 { 00:13:11.031 "name": "spare", 00:13:11.031 "uuid": "aca08df6-de08-5530-abdf-c251c6928412", 00:13:11.031 "is_configured": true, 00:13:11.031 "data_offset": 0, 00:13:11.031 "data_size": 65536 00:13:11.031 }, 00:13:11.031 { 00:13:11.031 "name": "BaseBdev2", 00:13:11.031 "uuid": "c7f6ce30-729e-5609-94fe-9a1a7f04b8e1", 00:13:11.031 "is_configured": true, 00:13:11.031 "data_offset": 0, 00:13:11.031 "data_size": 65536 00:13:11.031 } 00:13:11.031 ] 00:13:11.031 }' 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:11.031 
12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.031 "name": "raid_bdev1", 00:13:11.031 "uuid": "6a059c6e-66cd-44c4-bfbe-d7069bf7d82a", 00:13:11.031 "strip_size_kb": 0, 00:13:11.031 "state": "online", 00:13:11.031 "raid_level": "raid1", 00:13:11.031 "superblock": false, 00:13:11.031 "num_base_bdevs": 2, 00:13:11.031 "num_base_bdevs_discovered": 2, 00:13:11.031 "num_base_bdevs_operational": 2, 00:13:11.031 "base_bdevs_list": [ 00:13:11.031 { 00:13:11.031 "name": "spare", 00:13:11.031 "uuid": "aca08df6-de08-5530-abdf-c251c6928412", 00:13:11.031 "is_configured": true, 00:13:11.031 "data_offset": 0, 00:13:11.031 "data_size": 65536 00:13:11.031 }, 00:13:11.031 { 00:13:11.031 "name": "BaseBdev2", 00:13:11.031 "uuid": "c7f6ce30-729e-5609-94fe-9a1a7f04b8e1", 00:13:11.031 "is_configured": true, 00:13:11.031 "data_offset": 0, 00:13:11.031 "data_size": 65536 00:13:11.031 } 00:13:11.031 ] 00:13:11.031 }' 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:13:11.031 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.598 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.598 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.598 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.598 [2024-11-19 12:05:14.800760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.598 [2024-11-19 12:05:14.800909] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.598 00:13:11.598 Latency(us) 00:13:11.598 [2024-11-19T12:05:14.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.598 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:11.598 raid_bdev1 : 7.85 84.96 254.89 0.00 0.00 16520.45 338.05 111268.11 00:13:11.598 [2024-11-19T12:05:14.975Z] =================================================================================================================== 00:13:11.598 [2024-11-19T12:05:14.975Z] Total : 84.96 254.89 0.00 0.00 16520.45 338.05 111268.11 00:13:11.598 [2024-11-19 12:05:14.919277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.598 [2024-11-19 12:05:14.919419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.598 [2024-11-19 12:05:14.919526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.598 [2024-11-19 12:05:14.919547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:11.598 { 00:13:11.598 "results": [ 00:13:11.598 { 00:13:11.598 "job": "raid_bdev1", 00:13:11.599 "core_mask": "0x1", 00:13:11.599 "workload": "randrw", 00:13:11.599 "percentage": 50, 
00:13:11.599 "status": "finished", 00:13:11.599 "queue_depth": 2, 00:13:11.599 "io_size": 3145728, 00:13:11.599 "runtime": 7.850328, 00:13:11.599 "iops": 84.96460275290409, 00:13:11.599 "mibps": 254.89380825871226, 00:13:11.599 "io_failed": 0, 00:13:11.599 "io_timeout": 0, 00:13:11.599 "avg_latency_us": 16520.446627341353, 00:13:11.599 "min_latency_us": 338.05414847161575, 00:13:11.599 "max_latency_us": 111268.10829694323 00:13:11.599 } 00:13:11.599 ], 00:13:11.599 "core_count": 1 00:13:11.599 } 00:13:11.599 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.599 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.599 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:11.599 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.599 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.599 12:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.857 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:11.857 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:11.857 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:11.857 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:11.857 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.857 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:11.857 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.857 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:11.857 
12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:11.857 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:11.857 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.857 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.857 12:05:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:11.857 /dev/nbd0 00:13:11.857 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:11.857 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:11.857 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:11.857 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:11.857 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.857 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.857 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:11.858 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:11.858 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.858 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.858 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.858 1+0 records in 00:13:11.858 1+0 records out 00:13:11.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421863 s, 9.7 MB/s 00:13:11.858 12:05:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.858 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:11.858 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:12.117 /dev/nbd1 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.117 1+0 records in 00:13:12.117 1+0 records out 00:13:12.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307477 s, 13.3 MB/s 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:12.117 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:12.375 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:12.375 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.375 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:12.375 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.375 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:12.375 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.375 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.635 12:05:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76470 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76470 ']' 00:13:12.894 12:05:16 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76470 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76470 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76470' 00:13:12.894 killing process with pid 76470 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76470 00:13:12.894 Received shutdown signal, test time was about 9.085314 seconds 00:13:12.894 00:13:12.894 Latency(us) 00:13:12.894 [2024-11-19T12:05:16.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.894 [2024-11-19T12:05:16.271Z] =================================================================================================================== 00:13:12.894 [2024-11-19T12:05:16.271Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:12.894 [2024-11-19 12:05:16.129128] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.894 12:05:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76470 00:13:13.154 [2024-11-19 12:05:16.350912] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:14.538 00:13:14.538 real 0m12.209s 00:13:14.538 user 0m15.203s 00:13:14.538 sys 0m1.566s 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.538 ************************************ 00:13:14.538 END TEST raid_rebuild_test_io 00:13:14.538 ************************************ 00:13:14.538 12:05:17 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:14.538 12:05:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:14.538 12:05:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.538 12:05:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:14.538 ************************************ 00:13:14.538 START TEST raid_rebuild_test_sb_io 00:13:14.538 ************************************ 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.538 12:05:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76846 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76846 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76846 ']' 
00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.538 12:05:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.538 [2024-11-19 12:05:17.639390] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:13:14.538 [2024-11-19 12:05:17.639581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:14.538 Zero copy mechanism will not be used. 
00:13:14.539 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76846 ] 00:13:14.539 [2024-11-19 12:05:17.812600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.798 [2024-11-19 12:05:17.931573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.798 [2024-11-19 12:05:18.138565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.798 [2024-11-19 12:05:18.138596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 BaseBdev1_malloc 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 [2024-11-19 12:05:18.506728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:15.367 [2024-11-19 12:05:18.506867] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.367 [2024-11-19 12:05:18.506921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:15.367 [2024-11-19 12:05:18.506956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.367 [2024-11-19 12:05:18.509071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.367 [2024-11-19 12:05:18.509141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:15.367 BaseBdev1 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 BaseBdev2_malloc 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 [2024-11-19 12:05:18.556072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:15.367 [2024-11-19 12:05:18.556190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.367 [2024-11-19 12:05:18.556214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:13:15.367 [2024-11-19 12:05:18.556227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.367 [2024-11-19 12:05:18.558291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.367 [2024-11-19 12:05:18.558362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:15.367 BaseBdev2 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 spare_malloc 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 spare_delay 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 [2024-11-19 12:05:18.632277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:15.367 
[2024-11-19 12:05:18.632407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.367 [2024-11-19 12:05:18.632446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:15.367 [2024-11-19 12:05:18.632490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.367 [2024-11-19 12:05:18.634667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.367 [2024-11-19 12:05:18.634756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:15.367 spare 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.367 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 [2024-11-19 12:05:18.644321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.368 [2024-11-19 12:05:18.646114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.368 [2024-11-19 12:05:18.646273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:15.368 [2024-11-19 12:05:18.646291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:15.368 [2024-11-19 12:05:18.646527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:15.368 [2024-11-19 12:05:18.646708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:15.368 [2024-11-19 12:05:18.646717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:13:15.368 [2024-11-19 12:05:18.646858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.368 "name": "raid_bdev1", 00:13:15.368 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:15.368 "strip_size_kb": 0, 00:13:15.368 "state": "online", 00:13:15.368 "raid_level": "raid1", 00:13:15.368 "superblock": true, 00:13:15.368 "num_base_bdevs": 2, 00:13:15.368 "num_base_bdevs_discovered": 2, 00:13:15.368 "num_base_bdevs_operational": 2, 00:13:15.368 "base_bdevs_list": [ 00:13:15.368 { 00:13:15.368 "name": "BaseBdev1", 00:13:15.368 "uuid": "fb41eb71-ddbe-5157-ac2b-1a62dc1066dc", 00:13:15.368 "is_configured": true, 00:13:15.368 "data_offset": 2048, 00:13:15.368 "data_size": 63488 00:13:15.368 }, 00:13:15.368 { 00:13:15.368 "name": "BaseBdev2", 00:13:15.368 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:15.368 "is_configured": true, 00:13:15.368 "data_offset": 2048, 00:13:15.368 "data_size": 63488 00:13:15.368 } 00:13:15.368 ] 00:13:15.368 }' 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.368 12:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.937 [2024-11-19 12:05:19.123778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r 
'.[].base_bdevs_list[0].data_offset' 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:15.937 [2024-11-19 12:05:19.215325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.937 
12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.937 "name": "raid_bdev1", 00:13:15.937 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:15.937 "strip_size_kb": 0, 00:13:15.937 "state": "online", 00:13:15.937 "raid_level": "raid1", 00:13:15.937 "superblock": true, 00:13:15.937 "num_base_bdevs": 2, 00:13:15.937 "num_base_bdevs_discovered": 1, 00:13:15.937 "num_base_bdevs_operational": 1, 00:13:15.937 "base_bdevs_list": [ 00:13:15.937 { 00:13:15.937 "name": null, 00:13:15.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.937 "is_configured": false, 00:13:15.937 "data_offset": 0, 00:13:15.937 "data_size": 63488 00:13:15.937 }, 00:13:15.937 { 00:13:15.937 "name": "BaseBdev2", 00:13:15.937 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:15.937 "is_configured": true, 00:13:15.937 "data_offset": 2048, 
00:13:15.937 "data_size": 63488 00:13:15.937 } 00:13:15.937 ] 00:13:15.937 }' 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.937 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.198 [2024-11-19 12:05:19.319656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:16.198 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:16.198 Zero copy mechanism will not be used. 00:13:16.198 Running I/O for 60 seconds... 00:13:16.458 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:16.458 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.458 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.458 [2024-11-19 12:05:19.676503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.458 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.458 12:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:16.458 [2024-11-19 12:05:19.729260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:16.458 [2024-11-19 12:05:19.731323] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:16.718 [2024-11-19 12:05:19.846107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:16.718 [2024-11-19 12:05:19.846677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:16.718 [2024-11-19 12:05:20.061926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:16.718 
[2024-11-19 12:05:20.062280] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:16.977 [2024-11-19 12:05:20.304496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:17.237 191.00 IOPS, 573.00 MiB/s [2024-11-19T12:05:20.614Z] [2024-11-19 12:05:20.518393] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:17.237 [2024-11-19 12:05:20.518729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:17.496 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.496 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.496 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.496 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.496 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.496 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.496 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.496 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.496 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.496 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.496 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.496 "name": "raid_bdev1", 00:13:17.496 "uuid": 
"1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:17.496 "strip_size_kb": 0, 00:13:17.496 "state": "online", 00:13:17.496 "raid_level": "raid1", 00:13:17.496 "superblock": true, 00:13:17.496 "num_base_bdevs": 2, 00:13:17.496 "num_base_bdevs_discovered": 2, 00:13:17.496 "num_base_bdevs_operational": 2, 00:13:17.496 "process": { 00:13:17.496 "type": "rebuild", 00:13:17.496 "target": "spare", 00:13:17.496 "progress": { 00:13:17.496 "blocks": 10240, 00:13:17.496 "percent": 16 00:13:17.496 } 00:13:17.496 }, 00:13:17.496 "base_bdevs_list": [ 00:13:17.496 { 00:13:17.496 "name": "spare", 00:13:17.496 "uuid": "b2ba0505-156d-557b-a45b-938c07586a2c", 00:13:17.496 "is_configured": true, 00:13:17.496 "data_offset": 2048, 00:13:17.496 "data_size": 63488 00:13:17.496 }, 00:13:17.496 { 00:13:17.496 "name": "BaseBdev2", 00:13:17.497 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:17.497 "is_configured": true, 00:13:17.497 "data_offset": 2048, 00:13:17.497 "data_size": 63488 00:13:17.497 } 00:13:17.497 ] 00:13:17.497 }' 00:13:17.497 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.497 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:17.497 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.497 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.497 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:17.497 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.497 12:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.497 [2024-11-19 12:05:20.841750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.756 [2024-11-19 12:05:20.941757] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:17.756 [2024-11-19 12:05:20.944182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.756 [2024-11-19 12:05:20.944277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.757 [2024-11-19 12:05:20.944294] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:17.757 [2024-11-19 12:05:20.994561] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.757 12:05:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.757 "name": "raid_bdev1", 00:13:17.757 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:17.757 "strip_size_kb": 0, 00:13:17.757 "state": "online", 00:13:17.757 "raid_level": "raid1", 00:13:17.757 "superblock": true, 00:13:17.757 "num_base_bdevs": 2, 00:13:17.757 "num_base_bdevs_discovered": 1, 00:13:17.757 "num_base_bdevs_operational": 1, 00:13:17.757 "base_bdevs_list": [ 00:13:17.757 { 00:13:17.757 "name": null, 00:13:17.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.757 "is_configured": false, 00:13:17.757 "data_offset": 0, 00:13:17.757 "data_size": 63488 00:13:17.757 }, 00:13:17.757 { 00:13:17.757 "name": "BaseBdev2", 00:13:17.757 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:17.757 "is_configured": true, 00:13:17.757 "data_offset": 2048, 00:13:17.757 "data_size": 63488 00:13:17.757 } 00:13:17.757 ] 00:13:17.757 }' 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.757 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.276 159.00 IOPS, 477.00 MiB/s [2024-11-19T12:05:21.653Z] 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.276 "name": "raid_bdev1", 00:13:18.276 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:18.276 "strip_size_kb": 0, 00:13:18.276 "state": "online", 00:13:18.276 "raid_level": "raid1", 00:13:18.276 "superblock": true, 00:13:18.276 "num_base_bdevs": 2, 00:13:18.276 "num_base_bdevs_discovered": 1, 00:13:18.276 "num_base_bdevs_operational": 1, 00:13:18.276 "base_bdevs_list": [ 00:13:18.276 { 00:13:18.276 "name": null, 00:13:18.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.276 "is_configured": false, 00:13:18.276 "data_offset": 0, 00:13:18.276 "data_size": 63488 00:13:18.276 }, 00:13:18.276 { 00:13:18.276 "name": "BaseBdev2", 00:13:18.276 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:18.276 "is_configured": true, 00:13:18.276 "data_offset": 2048, 00:13:18.276 "data_size": 63488 00:13:18.276 } 00:13:18.276 ] 00:13:18.276 }' 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.276 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.276 [2024-11-19 12:05:21.606910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.536 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.536 12:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:18.536 [2024-11-19 12:05:21.680609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:18.536 [2024-11-19 12:05:21.682584] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:18.536 [2024-11-19 12:05:21.790755] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:18.536 [2024-11-19 12:05:21.791378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:18.795 [2024-11-19 12:05:22.005684] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:18.795 [2024-11-19 12:05:22.006032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:19.312 146.00 IOPS, 438.00 MiB/s [2024-11-19T12:05:22.689Z] [2024-11-19 12:05:22.445420] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 
offset_end: 12288 00:13:19.312 [2024-11-19 12:05:22.445740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:19.312 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.312 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.312 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.312 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.312 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.312 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.312 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.312 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.312 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.576 "name": "raid_bdev1", 00:13:19.576 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:19.576 "strip_size_kb": 0, 00:13:19.576 "state": "online", 00:13:19.576 "raid_level": "raid1", 00:13:19.576 "superblock": true, 00:13:19.576 "num_base_bdevs": 2, 00:13:19.576 "num_base_bdevs_discovered": 2, 00:13:19.576 "num_base_bdevs_operational": 2, 00:13:19.576 "process": { 00:13:19.576 "type": "rebuild", 00:13:19.576 "target": "spare", 00:13:19.576 "progress": { 00:13:19.576 "blocks": 12288, 00:13:19.576 "percent": 19 00:13:19.576 } 00:13:19.576 }, 
00:13:19.576 "base_bdevs_list": [ 00:13:19.576 { 00:13:19.576 "name": "spare", 00:13:19.576 "uuid": "b2ba0505-156d-557b-a45b-938c07586a2c", 00:13:19.576 "is_configured": true, 00:13:19.576 "data_offset": 2048, 00:13:19.576 "data_size": 63488 00:13:19.576 }, 00:13:19.576 { 00:13:19.576 "name": "BaseBdev2", 00:13:19.576 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:19.576 "is_configured": true, 00:13:19.576 "data_offset": 2048, 00:13:19.576 "data_size": 63488 00:13:19.576 } 00:13:19.576 ] 00:13:19.576 }' 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.576 [2024-11-19 12:05:22.763854] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:19.576 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=415 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.576 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.577 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.577 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.577 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.577 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.577 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.577 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.577 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.577 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.577 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.577 "name": "raid_bdev1", 00:13:19.577 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:19.577 "strip_size_kb": 0, 00:13:19.577 "state": "online", 00:13:19.577 "raid_level": "raid1", 00:13:19.577 "superblock": true, 00:13:19.577 "num_base_bdevs": 2, 00:13:19.577 "num_base_bdevs_discovered": 2, 00:13:19.577 "num_base_bdevs_operational": 2, 00:13:19.577 "process": { 00:13:19.577 "type": "rebuild", 00:13:19.577 "target": "spare", 00:13:19.577 "progress": { 00:13:19.577 "blocks": 14336, 00:13:19.577 "percent": 22 00:13:19.577 } 00:13:19.577 }, 00:13:19.577 "base_bdevs_list": [ 00:13:19.577 { 00:13:19.577 "name": "spare", 00:13:19.577 "uuid": "b2ba0505-156d-557b-a45b-938c07586a2c", 00:13:19.577 "is_configured": true, 
00:13:19.577 "data_offset": 2048, 00:13:19.577 "data_size": 63488 00:13:19.577 }, 00:13:19.577 { 00:13:19.577 "name": "BaseBdev2", 00:13:19.577 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:19.577 "is_configured": true, 00:13:19.577 "data_offset": 2048, 00:13:19.577 "data_size": 63488 00:13:19.577 } 00:13:19.577 ] 00:13:19.577 }' 00:13:19.577 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.577 [2024-11-19 12:05:22.873134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:19.577 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.577 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.845 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.845 12:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:19.845 [2024-11-19 12:05:23.084518] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:20.105 [2024-11-19 12:05:23.305693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:20.105 [2024-11-19 12:05:23.306099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:20.673 136.25 IOPS, 408.75 MiB/s [2024-11-19T12:05:24.050Z] 12:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.673 12:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.673 12:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.674 12:05:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.674 12:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.674 12:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.674 12:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.674 12:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.674 12:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.674 12:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.674 12:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.674 12:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.674 "name": "raid_bdev1", 00:13:20.674 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:20.674 "strip_size_kb": 0, 00:13:20.674 "state": "online", 00:13:20.674 "raid_level": "raid1", 00:13:20.674 "superblock": true, 00:13:20.674 "num_base_bdevs": 2, 00:13:20.674 "num_base_bdevs_discovered": 2, 00:13:20.674 "num_base_bdevs_operational": 2, 00:13:20.674 "process": { 00:13:20.674 "type": "rebuild", 00:13:20.674 "target": "spare", 00:13:20.674 "progress": { 00:13:20.674 "blocks": 30720, 00:13:20.674 "percent": 48 00:13:20.674 } 00:13:20.674 }, 00:13:20.674 "base_bdevs_list": [ 00:13:20.674 { 00:13:20.674 "name": "spare", 00:13:20.674 "uuid": "b2ba0505-156d-557b-a45b-938c07586a2c", 00:13:20.674 "is_configured": true, 00:13:20.674 "data_offset": 2048, 00:13:20.674 "data_size": 63488 00:13:20.674 }, 00:13:20.674 { 00:13:20.674 "name": "BaseBdev2", 00:13:20.674 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:20.674 "is_configured": true, 00:13:20.674 "data_offset": 2048, 00:13:20.674 "data_size": 
63488 00:13:20.674 } 00:13:20.674 ] 00:13:20.674 }' 00:13:20.674 12:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.933 12:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.933 12:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.933 12:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.933 12:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.451 123.00 IOPS, 369.00 MiB/s [2024-11-19T12:05:24.828Z] [2024-11-19 12:05:24.703896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:21.710 [2024-11-19 12:05:24.918671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.969 "name": "raid_bdev1", 00:13:21.969 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:21.969 "strip_size_kb": 0, 00:13:21.969 "state": "online", 00:13:21.969 "raid_level": "raid1", 00:13:21.969 "superblock": true, 00:13:21.969 "num_base_bdevs": 2, 00:13:21.969 "num_base_bdevs_discovered": 2, 00:13:21.969 "num_base_bdevs_operational": 2, 00:13:21.969 "process": { 00:13:21.969 "type": "rebuild", 00:13:21.969 "target": "spare", 00:13:21.969 "progress": { 00:13:21.969 "blocks": 49152, 00:13:21.969 "percent": 77 00:13:21.969 } 00:13:21.969 }, 00:13:21.969 "base_bdevs_list": [ 00:13:21.969 { 00:13:21.969 "name": "spare", 00:13:21.969 "uuid": "b2ba0505-156d-557b-a45b-938c07586a2c", 00:13:21.969 "is_configured": true, 00:13:21.969 "data_offset": 2048, 00:13:21.969 "data_size": 63488 00:13:21.969 }, 00:13:21.969 { 00:13:21.969 "name": "BaseBdev2", 00:13:21.969 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:21.969 "is_configured": true, 00:13:21.969 "data_offset": 2048, 00:13:21.969 "data_size": 63488 00:13:21.969 } 00:13:21.969 ] 00:13:21.969 }' 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.969 12:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:13:21.969 [2024-11-19 12:05:25.247474] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:22.228 109.33 IOPS, 328.00 MiB/s [2024-11-19T12:05:25.605Z] [2024-11-19 12:05:25.452613] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:22.228 [2024-11-19 12:05:25.453104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:22.798 [2024-11-19 12:05:25.880620] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:22.798 [2024-11-19 12:05:25.986398] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:22.798 [2024-11-19 12:05:25.989481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.058 12:05:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.058 "name": "raid_bdev1", 00:13:23.058 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:23.058 "strip_size_kb": 0, 00:13:23.058 "state": "online", 00:13:23.058 "raid_level": "raid1", 00:13:23.058 "superblock": true, 00:13:23.058 "num_base_bdevs": 2, 00:13:23.058 "num_base_bdevs_discovered": 2, 00:13:23.058 "num_base_bdevs_operational": 2, 00:13:23.058 "base_bdevs_list": [ 00:13:23.058 { 00:13:23.058 "name": "spare", 00:13:23.058 "uuid": "b2ba0505-156d-557b-a45b-938c07586a2c", 00:13:23.058 "is_configured": true, 00:13:23.058 "data_offset": 2048, 00:13:23.058 "data_size": 63488 00:13:23.058 }, 00:13:23.058 { 00:13:23.058 "name": "BaseBdev2", 00:13:23.058 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:23.058 "is_configured": true, 00:13:23.058 "data_offset": 2048, 00:13:23.058 "data_size": 63488 00:13:23.058 } 00:13:23.058 ] 00:13:23.058 }' 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.058 100.00 IOPS, 300.00 MiB/s [2024-11-19T12:05:26.435Z] 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.058 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.058 "name": "raid_bdev1", 00:13:23.058 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:23.058 "strip_size_kb": 0, 00:13:23.058 "state": "online", 00:13:23.058 "raid_level": "raid1", 00:13:23.058 "superblock": true, 00:13:23.058 "num_base_bdevs": 2, 00:13:23.059 "num_base_bdevs_discovered": 2, 00:13:23.059 "num_base_bdevs_operational": 2, 00:13:23.059 "base_bdevs_list": [ 00:13:23.059 { 00:13:23.059 "name": "spare", 00:13:23.059 "uuid": "b2ba0505-156d-557b-a45b-938c07586a2c", 00:13:23.059 "is_configured": true, 00:13:23.059 "data_offset": 2048, 00:13:23.059 "data_size": 63488 00:13:23.059 }, 00:13:23.059 { 00:13:23.059 "name": "BaseBdev2", 00:13:23.059 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:23.059 "is_configured": true, 00:13:23.059 "data_offset": 2048, 00:13:23.059 "data_size": 63488 00:13:23.059 } 00:13:23.059 ] 00:13:23.059 }' 00:13:23.318 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:13:23.318 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.318 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.318 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.318 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:23.318 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.318 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.318 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.319 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.319 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:23.319 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.319 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.319 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.319 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.319 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.319 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.319 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.319 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.319 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.319 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.319 "name": "raid_bdev1", 00:13:23.319 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:23.319 "strip_size_kb": 0, 00:13:23.319 "state": "online", 00:13:23.319 "raid_level": "raid1", 00:13:23.319 "superblock": true, 00:13:23.319 "num_base_bdevs": 2, 00:13:23.319 "num_base_bdevs_discovered": 2, 00:13:23.319 "num_base_bdevs_operational": 2, 00:13:23.319 "base_bdevs_list": [ 00:13:23.319 { 00:13:23.319 "name": "spare", 00:13:23.319 "uuid": "b2ba0505-156d-557b-a45b-938c07586a2c", 00:13:23.319 "is_configured": true, 00:13:23.319 "data_offset": 2048, 00:13:23.319 "data_size": 63488 00:13:23.319 }, 00:13:23.319 { 00:13:23.319 "name": "BaseBdev2", 00:13:23.319 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:23.319 "is_configured": true, 00:13:23.319 "data_offset": 2048, 00:13:23.319 "data_size": 63488 00:13:23.319 } 00:13:23.319 ] 00:13:23.319 }' 00:13:23.319 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.319 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.888 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:23.888 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.888 12:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.888 [2024-11-19 12:05:26.965309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:23.888 [2024-11-19 12:05:26.965412] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.888 00:13:23.888 Latency(us) 00:13:23.888 [2024-11-19T12:05:27.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.888 Job: raid_bdev1 (Core Mask 
0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:23.888 raid_bdev1 : 7.76 93.84 281.51 0.00 0.00 14281.46 302.28 113557.58 00:13:23.888 [2024-11-19T12:05:27.265Z] =================================================================================================================== 00:13:23.888 [2024-11-19T12:05:27.265Z] Total : 93.84 281.51 0.00 0.00 14281.46 302.28 113557.58 00:13:23.888 [2024-11-19 12:05:27.087216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.888 [2024-11-19 12:05:27.087315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.888 [2024-11-19 12:05:27.087412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:23.888 [2024-11-19 12:05:27.087479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:23.888 { 00:13:23.888 "results": [ 00:13:23.888 { 00:13:23.888 "job": "raid_bdev1", 00:13:23.888 "core_mask": "0x1", 00:13:23.888 "workload": "randrw", 00:13:23.888 "percentage": 50, 00:13:23.888 "status": "finished", 00:13:23.888 "queue_depth": 2, 00:13:23.888 "io_size": 3145728, 00:13:23.888 "runtime": 7.758207, 00:13:23.888 "iops": 93.83611445273372, 00:13:23.888 "mibps": 281.50834335820116, 00:13:23.888 "io_failed": 0, 00:13:23.888 "io_timeout": 0, 00:13:23.888 "avg_latency_us": 14281.462258265752, 00:13:23.888 "min_latency_us": 302.2812227074236, 00:13:23.888 "max_latency_us": 113557.57554585153 00:13:23.888 } 00:13:23.888 ], 00:13:23.888 "core_count": 1 00:13:23.888 } 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.888 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:24.148 /dev/nbd0 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:24.148 12:05:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.148 1+0 records in 00:13:24.148 1+0 records out 00:13:24.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452411 s, 9.1 MB/s 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.148 
12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.148 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:24.407 /dev/nbd1 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.407 1+0 records in 00:13:24.407 1+0 records out 00:13:24.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026986 s, 15.2 MB/s 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.407 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:24.666 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:24.666 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:13:24.666 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:24.666 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.666 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:24.666 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.666 12:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:24.666 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:24.666 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:24.666 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:24.666 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.666 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.666 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:24.666 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:24.666 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.666 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:24.666 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.666 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:24.666 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.666 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:24.666 12:05:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.666 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.925 
[2024-11-19 12:05:28.252421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:24.925 [2024-11-19 12:05:28.252531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.925 [2024-11-19 12:05:28.252571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:24.925 [2024-11-19 12:05:28.252602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.925 [2024-11-19 12:05:28.254847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.925 [2024-11-19 12:05:28.254922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:24.925 [2024-11-19 12:05:28.255045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:24.925 [2024-11-19 12:05:28.255147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.925 [2024-11-19 12:05:28.255326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.925 spare 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.925 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.184 [2024-11-19 12:05:28.355272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:25.184 [2024-11-19 12:05:28.355369] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:25.184 [2024-11-19 12:05:28.355710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:25.184 [2024-11-19 12:05:28.355904] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:25.184 [2024-11-19 12:05:28.355922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:25.184 [2024-11-19 12:05:28.356150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.184 "name": "raid_bdev1", 00:13:25.184 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:25.184 "strip_size_kb": 0, 00:13:25.184 "state": "online", 00:13:25.184 "raid_level": "raid1", 00:13:25.184 "superblock": true, 00:13:25.184 "num_base_bdevs": 2, 00:13:25.184 "num_base_bdevs_discovered": 2, 00:13:25.184 "num_base_bdevs_operational": 2, 00:13:25.184 "base_bdevs_list": [ 00:13:25.184 { 00:13:25.184 "name": "spare", 00:13:25.184 "uuid": "b2ba0505-156d-557b-a45b-938c07586a2c", 00:13:25.184 "is_configured": true, 00:13:25.184 "data_offset": 2048, 00:13:25.184 "data_size": 63488 00:13:25.184 }, 00:13:25.184 { 00:13:25.184 "name": "BaseBdev2", 00:13:25.184 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:25.184 "is_configured": true, 00:13:25.184 "data_offset": 2048, 00:13:25.184 "data_size": 63488 00:13:25.184 } 00:13:25.184 ] 00:13:25.184 }' 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.184 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.752 "name": "raid_bdev1", 00:13:25.752 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:25.752 "strip_size_kb": 0, 00:13:25.752 "state": "online", 00:13:25.752 "raid_level": "raid1", 00:13:25.752 "superblock": true, 00:13:25.752 "num_base_bdevs": 2, 00:13:25.752 "num_base_bdevs_discovered": 2, 00:13:25.752 "num_base_bdevs_operational": 2, 00:13:25.752 "base_bdevs_list": [ 00:13:25.752 { 00:13:25.752 "name": "spare", 00:13:25.752 "uuid": "b2ba0505-156d-557b-a45b-938c07586a2c", 00:13:25.752 "is_configured": true, 00:13:25.752 "data_offset": 2048, 00:13:25.752 "data_size": 63488 00:13:25.752 }, 00:13:25.752 { 00:13:25.752 "name": "BaseBdev2", 00:13:25.752 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:25.752 "is_configured": true, 00:13:25.752 "data_offset": 2048, 00:13:25.752 "data_size": 63488 00:13:25.752 } 00:13:25.752 ] 00:13:25.752 }' 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.752 [2024-11-19 12:05:28.995315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.752 12:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.752 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.752 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.752 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:25.752 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.752 12:05:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.752 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.752 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.752 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.752 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.752 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.752 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.752 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.752 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.752 "name": "raid_bdev1", 00:13:25.752 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:25.752 "strip_size_kb": 0, 00:13:25.752 "state": "online", 00:13:25.752 "raid_level": "raid1", 00:13:25.752 "superblock": true, 00:13:25.752 "num_base_bdevs": 2, 00:13:25.752 "num_base_bdevs_discovered": 1, 00:13:25.752 "num_base_bdevs_operational": 1, 00:13:25.752 "base_bdevs_list": [ 00:13:25.752 { 00:13:25.752 "name": null, 00:13:25.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.752 "is_configured": false, 00:13:25.752 "data_offset": 0, 00:13:25.752 "data_size": 63488 00:13:25.752 }, 00:13:25.752 { 00:13:25.752 "name": "BaseBdev2", 00:13:25.752 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:25.752 "is_configured": true, 00:13:25.752 "data_offset": 2048, 00:13:25.752 "data_size": 63488 00:13:25.752 } 00:13:25.752 ] 00:13:25.752 }' 00:13:25.752 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.752 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.318 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:26.318 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.318 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.318 [2024-11-19 12:05:29.422695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.318 [2024-11-19 12:05:29.422968] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:26.318 [2024-11-19 12:05:29.423049] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:26.318 [2024-11-19 12:05:29.423140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.318 [2024-11-19 12:05:29.439833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:26.318 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.318 12:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:26.318 [2024-11-19 12:05:29.441753] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.259 "name": "raid_bdev1", 00:13:27.259 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:27.259 "strip_size_kb": 0, 00:13:27.259 "state": "online", 00:13:27.259 "raid_level": "raid1", 00:13:27.259 "superblock": true, 00:13:27.259 "num_base_bdevs": 2, 00:13:27.259 "num_base_bdevs_discovered": 2, 00:13:27.259 "num_base_bdevs_operational": 2, 00:13:27.259 "process": { 00:13:27.259 "type": "rebuild", 00:13:27.259 "target": "spare", 00:13:27.259 "progress": { 00:13:27.259 "blocks": 20480, 00:13:27.259 "percent": 32 00:13:27.259 } 00:13:27.259 }, 00:13:27.259 "base_bdevs_list": [ 00:13:27.259 { 00:13:27.259 "name": "spare", 00:13:27.259 "uuid": "b2ba0505-156d-557b-a45b-938c07586a2c", 00:13:27.259 "is_configured": true, 00:13:27.259 "data_offset": 2048, 00:13:27.259 "data_size": 63488 00:13:27.259 }, 00:13:27.259 { 00:13:27.259 "name": "BaseBdev2", 00:13:27.259 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:27.259 "is_configured": true, 00:13:27.259 "data_offset": 2048, 00:13:27.259 "data_size": 63488 00:13:27.259 } 00:13:27.259 ] 00:13:27.259 }' 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.259 
12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.259 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.259 [2024-11-19 12:05:30.605459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.528 [2024-11-19 12:05:30.647185] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:27.528 [2024-11-19 12:05:30.647301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.528 [2024-11-19 12:05:30.647340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.528 [2024-11-19 12:05:30.647378] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.528 "name": "raid_bdev1", 00:13:27.528 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:27.528 "strip_size_kb": 0, 00:13:27.528 "state": "online", 00:13:27.528 "raid_level": "raid1", 00:13:27.528 "superblock": true, 00:13:27.528 "num_base_bdevs": 2, 00:13:27.528 "num_base_bdevs_discovered": 1, 00:13:27.528 "num_base_bdevs_operational": 1, 00:13:27.528 "base_bdevs_list": [ 00:13:27.528 { 00:13:27.528 "name": null, 00:13:27.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.528 "is_configured": false, 00:13:27.528 "data_offset": 0, 00:13:27.528 "data_size": 63488 00:13:27.528 }, 00:13:27.528 { 00:13:27.528 "name": "BaseBdev2", 00:13:27.528 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:27.528 "is_configured": true, 00:13:27.528 "data_offset": 2048, 00:13:27.528 "data_size": 63488 00:13:27.528 } 00:13:27.528 ] 00:13:27.528 }' 00:13:27.528 12:05:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.528 12:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.786 12:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:27.786 12:05:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.786 12:05:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.786 [2024-11-19 12:05:31.100896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:27.786 [2024-11-19 12:05:31.100972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.786 [2024-11-19 12:05:31.101012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:27.786 [2024-11-19 12:05:31.101021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.786 [2024-11-19 12:05:31.101500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.786 [2024-11-19 12:05:31.101516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:27.786 [2024-11-19 12:05:31.101617] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:27.786 [2024-11-19 12:05:31.101630] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:27.786 [2024-11-19 12:05:31.101640] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:27.786 [2024-11-19 12:05:31.101659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:27.786 [2024-11-19 12:05:31.117749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:27.786 spare 00:13:27.786 12:05:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.786 12:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:27.786 [2024-11-19 12:05:31.119617] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.164 "name": "raid_bdev1", 00:13:29.164 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:29.164 "strip_size_kb": 0, 00:13:29.164 
"state": "online", 00:13:29.164 "raid_level": "raid1", 00:13:29.164 "superblock": true, 00:13:29.164 "num_base_bdevs": 2, 00:13:29.164 "num_base_bdevs_discovered": 2, 00:13:29.164 "num_base_bdevs_operational": 2, 00:13:29.164 "process": { 00:13:29.164 "type": "rebuild", 00:13:29.164 "target": "spare", 00:13:29.164 "progress": { 00:13:29.164 "blocks": 20480, 00:13:29.164 "percent": 32 00:13:29.164 } 00:13:29.164 }, 00:13:29.164 "base_bdevs_list": [ 00:13:29.164 { 00:13:29.164 "name": "spare", 00:13:29.164 "uuid": "b2ba0505-156d-557b-a45b-938c07586a2c", 00:13:29.164 "is_configured": true, 00:13:29.164 "data_offset": 2048, 00:13:29.164 "data_size": 63488 00:13:29.164 }, 00:13:29.164 { 00:13:29.164 "name": "BaseBdev2", 00:13:29.164 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:29.164 "is_configured": true, 00:13:29.164 "data_offset": 2048, 00:13:29.164 "data_size": 63488 00:13:29.164 } 00:13:29.164 ] 00:13:29.164 }' 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.164 [2024-11-19 12:05:32.259041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.164 [2024-11-19 12:05:32.324915] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:29.164 [2024-11-19 12:05:32.324995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.164 [2024-11-19 12:05:32.325021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.164 [2024-11-19 12:05:32.325031] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.164 12:05:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.164 "name": "raid_bdev1", 00:13:29.164 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:29.164 "strip_size_kb": 0, 00:13:29.164 "state": "online", 00:13:29.164 "raid_level": "raid1", 00:13:29.164 "superblock": true, 00:13:29.164 "num_base_bdevs": 2, 00:13:29.164 "num_base_bdevs_discovered": 1, 00:13:29.164 "num_base_bdevs_operational": 1, 00:13:29.164 "base_bdevs_list": [ 00:13:29.164 { 00:13:29.164 "name": null, 00:13:29.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.164 "is_configured": false, 00:13:29.164 "data_offset": 0, 00:13:29.164 "data_size": 63488 00:13:29.164 }, 00:13:29.164 { 00:13:29.164 "name": "BaseBdev2", 00:13:29.164 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:29.164 "is_configured": true, 00:13:29.164 "data_offset": 2048, 00:13:29.164 "data_size": 63488 00:13:29.164 } 00:13:29.164 ] 00:13:29.164 }' 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.164 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.423 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:29.423 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.423 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:29.423 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:29.423 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.423 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.423 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.423 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.423 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.681 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.681 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.681 "name": "raid_bdev1", 00:13:29.681 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:29.681 "strip_size_kb": 0, 00:13:29.681 "state": "online", 00:13:29.681 "raid_level": "raid1", 00:13:29.681 "superblock": true, 00:13:29.681 "num_base_bdevs": 2, 00:13:29.681 "num_base_bdevs_discovered": 1, 00:13:29.681 "num_base_bdevs_operational": 1, 00:13:29.681 "base_bdevs_list": [ 00:13:29.681 { 00:13:29.681 "name": null, 00:13:29.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.681 "is_configured": false, 00:13:29.682 "data_offset": 0, 00:13:29.682 "data_size": 63488 00:13:29.682 }, 00:13:29.682 { 00:13:29.682 "name": "BaseBdev2", 00:13:29.682 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:29.682 "is_configured": true, 00:13:29.682 "data_offset": 2048, 00:13:29.682 "data_size": 63488 00:13:29.682 } 00:13:29.682 ] 00:13:29.682 }' 00:13:29.682 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.682 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:29.682 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.682 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.682 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:29.682 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.682 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.682 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.682 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:29.682 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.682 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.682 [2024-11-19 12:05:32.914417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:29.682 [2024-11-19 12:05:32.914480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.682 [2024-11-19 12:05:32.914501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:29.682 [2024-11-19 12:05:32.914512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.682 [2024-11-19 12:05:32.914952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.682 [2024-11-19 12:05:32.914975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.682 [2024-11-19 12:05:32.915080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:29.682 [2024-11-19 12:05:32.915099] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:29.682 [2024-11-19 12:05:32.915106] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:29.682 [2024-11-19 12:05:32.915119] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:29.682 BaseBdev1 00:13:29.682 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.682 12:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.617 "name": "raid_bdev1", 00:13:30.617 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:30.617 "strip_size_kb": 0, 00:13:30.617 "state": "online", 00:13:30.617 "raid_level": "raid1", 00:13:30.617 "superblock": true, 00:13:30.617 "num_base_bdevs": 2, 00:13:30.617 "num_base_bdevs_discovered": 1, 00:13:30.617 "num_base_bdevs_operational": 1, 00:13:30.617 "base_bdevs_list": [ 00:13:30.617 { 00:13:30.617 "name": null, 00:13:30.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.617 "is_configured": false, 00:13:30.617 "data_offset": 0, 00:13:30.617 "data_size": 63488 00:13:30.617 }, 00:13:30.617 { 00:13:30.617 "name": "BaseBdev2", 00:13:30.617 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:30.617 "is_configured": true, 00:13:30.617 "data_offset": 2048, 00:13:30.617 "data_size": 63488 00:13:30.617 } 00:13:30.617 ] 00:13:30.617 }' 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.617 12:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.184 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.184 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.184 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.184 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.184 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.184 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.184 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:13:31.184 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.184 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.184 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.184 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.184 "name": "raid_bdev1", 00:13:31.184 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:31.184 "strip_size_kb": 0, 00:13:31.184 "state": "online", 00:13:31.184 "raid_level": "raid1", 00:13:31.184 "superblock": true, 00:13:31.184 "num_base_bdevs": 2, 00:13:31.184 "num_base_bdevs_discovered": 1, 00:13:31.184 "num_base_bdevs_operational": 1, 00:13:31.184 "base_bdevs_list": [ 00:13:31.184 { 00:13:31.184 "name": null, 00:13:31.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.184 "is_configured": false, 00:13:31.184 "data_offset": 0, 00:13:31.184 "data_size": 63488 00:13:31.184 }, 00:13:31.184 { 00:13:31.184 "name": "BaseBdev2", 00:13:31.184 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:31.184 "is_configured": true, 00:13:31.184 "data_offset": 2048, 00:13:31.184 "data_size": 63488 00:13:31.184 } 00:13:31.184 ] 00:13:31.185 }' 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.185 [2024-11-19 12:05:34.471967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.185 [2024-11-19 12:05:34.472145] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:31.185 [2024-11-19 12:05:34.472157] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:31.185 request: 00:13:31.185 { 00:13:31.185 "base_bdev": "BaseBdev1", 00:13:31.185 "raid_bdev": "raid_bdev1", 00:13:31.185 "method": "bdev_raid_add_base_bdev", 00:13:31.185 "req_id": 1 00:13:31.185 } 00:13:31.185 Got JSON-RPC error response 00:13:31.185 response: 00:13:31.185 { 00:13:31.185 "code": -22, 00:13:31.185 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:31.185 } 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:31.185 12:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:32.123 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.123 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.123 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.123 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.123 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.123 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:32.123 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.123 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.123 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.123 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.123 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.123 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.123 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:32.123 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.381 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.381 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.381 "name": "raid_bdev1", 00:13:32.381 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:32.381 "strip_size_kb": 0, 00:13:32.381 "state": "online", 00:13:32.381 "raid_level": "raid1", 00:13:32.381 "superblock": true, 00:13:32.381 "num_base_bdevs": 2, 00:13:32.381 "num_base_bdevs_discovered": 1, 00:13:32.381 "num_base_bdevs_operational": 1, 00:13:32.381 "base_bdevs_list": [ 00:13:32.381 { 00:13:32.381 "name": null, 00:13:32.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.381 "is_configured": false, 00:13:32.381 "data_offset": 0, 00:13:32.381 "data_size": 63488 00:13:32.381 }, 00:13:32.381 { 00:13:32.381 "name": "BaseBdev2", 00:13:32.381 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:32.381 "is_configured": true, 00:13:32.381 "data_offset": 2048, 00:13:32.381 "data_size": 63488 00:13:32.381 } 00:13:32.381 ] 00:13:32.381 }' 00:13:32.381 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.381 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.640 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:32.640 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.640 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:32.640 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:32.640 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.640 12:05:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.640 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.640 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.640 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.640 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.640 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.640 "name": "raid_bdev1", 00:13:32.640 "uuid": "1b6385ef-8e81-4e11-a01c-a4b35dc0ef7e", 00:13:32.640 "strip_size_kb": 0, 00:13:32.640 "state": "online", 00:13:32.640 "raid_level": "raid1", 00:13:32.640 "superblock": true, 00:13:32.640 "num_base_bdevs": 2, 00:13:32.640 "num_base_bdevs_discovered": 1, 00:13:32.640 "num_base_bdevs_operational": 1, 00:13:32.640 "base_bdevs_list": [ 00:13:32.640 { 00:13:32.640 "name": null, 00:13:32.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.640 "is_configured": false, 00:13:32.640 "data_offset": 0, 00:13:32.640 "data_size": 63488 00:13:32.640 }, 00:13:32.640 { 00:13:32.640 "name": "BaseBdev2", 00:13:32.640 "uuid": "c66fd461-1324-584f-9136-380247018b86", 00:13:32.640 "is_configured": true, 00:13:32.640 "data_offset": 2048, 00:13:32.640 "data_size": 63488 00:13:32.640 } 00:13:32.640 ] 00:13:32.640 }' 00:13:32.640 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.640 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:32.640 12:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.899 12:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:32.899 12:05:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76846 00:13:32.899 12:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76846 ']' 00:13:32.899 12:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76846 00:13:32.899 12:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:32.899 12:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.899 12:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76846 00:13:32.899 killing process with pid 76846 00:13:32.899 Received shutdown signal, test time was about 16.770668 seconds 00:13:32.899 00:13:32.899 Latency(us) 00:13:32.899 [2024-11-19T12:05:36.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.899 [2024-11-19T12:05:36.276Z] =================================================================================================================== 00:13:32.899 [2024-11-19T12:05:36.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:32.899 12:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.899 12:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.899 12:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76846' 00:13:32.899 12:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76846 00:13:32.899 12:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76846 00:13:32.899 [2024-11-19 12:05:36.060177] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.899 [2024-11-19 12:05:36.060304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.899 [2024-11-19 12:05:36.060409] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.899 [2024-11-19 12:05:36.060423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:33.158 [2024-11-19 12:05:36.279041] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:34.107 12:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:34.107 00:13:34.107 real 0m19.842s 00:13:34.107 user 0m25.914s 00:13:34.107 sys 0m2.087s 00:13:34.107 ************************************ 00:13:34.107 END TEST raid_rebuild_test_sb_io 00:13:34.107 ************************************ 00:13:34.107 12:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.107 12:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.107 12:05:37 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:34.107 12:05:37 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:34.107 12:05:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:34.107 12:05:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.108 12:05:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.108 ************************************ 00:13:34.108 START TEST raid_rebuild_test 00:13:34.108 ************************************ 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:34.108 12:05:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77535 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77535 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77535 ']' 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.108 12:05:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.367 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:34.367 Zero copy mechanism will not be used. 
00:13:34.367 [2024-11-19 12:05:37.560354] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:13:34.367 [2024-11-19 12:05:37.560468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77535 ] 00:13:34.367 [2024-11-19 12:05:37.732353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.626 [2024-11-19 12:05:37.846460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.885 [2024-11-19 12:05:38.040058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.885 [2024-11-19 12:05:38.040095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.145 BaseBdev1_malloc 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.145 
[2024-11-19 12:05:38.420887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:35.145 [2024-11-19 12:05:38.420956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.145 [2024-11-19 12:05:38.420976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:35.145 [2024-11-19 12:05:38.420987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.145 [2024-11-19 12:05:38.423096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.145 [2024-11-19 12:05:38.423133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:35.145 BaseBdev1 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.145 BaseBdev2_malloc 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.145 [2024-11-19 12:05:38.473165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:35.145 [2024-11-19 12:05:38.473239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:35.145 [2024-11-19 12:05:38.473255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:35.145 [2024-11-19 12:05:38.473265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.145 [2024-11-19 12:05:38.475265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.145 [2024-11-19 12:05:38.475302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:35.145 BaseBdev2 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.145 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.405 BaseBdev3_malloc 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.405 [2024-11-19 12:05:38.557942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:35.405 [2024-11-19 12:05:38.558053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.405 [2024-11-19 12:05:38.558093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:35.405 [2024-11-19 12:05:38.558105] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.405 [2024-11-19 12:05:38.560159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.405 [2024-11-19 12:05:38.560198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:35.405 BaseBdev3 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.405 BaseBdev4_malloc 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.405 [2024-11-19 12:05:38.612241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:35.405 [2024-11-19 12:05:38.612291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.405 [2024-11-19 12:05:38.612319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:35.405 [2024-11-19 12:05:38.612345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.405 [2024-11-19 12:05:38.614393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.405 [2024-11-19 12:05:38.614465] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:35.405 BaseBdev4 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.405 spare_malloc 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.405 spare_delay 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.405 [2024-11-19 12:05:38.676273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:35.405 [2024-11-19 12:05:38.676328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.405 [2024-11-19 12:05:38.676346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:35.405 [2024-11-19 12:05:38.676355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.405 [2024-11-19 
12:05:38.678325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.405 [2024-11-19 12:05:38.678412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:35.405 spare 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.405 [2024-11-19 12:05:38.688300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.405 [2024-11-19 12:05:38.690072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.405 [2024-11-19 12:05:38.690141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:35.405 [2024-11-19 12:05:38.690194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:35.405 [2024-11-19 12:05:38.690270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:35.405 [2024-11-19 12:05:38.690283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:35.405 [2024-11-19 12:05:38.690529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:35.405 [2024-11-19 12:05:38.690689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:35.405 [2024-11-19 12:05:38.690701] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:35.405 [2024-11-19 12:05:38.690863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.405 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.406 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.406 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.406 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.406 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.406 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.406 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.406 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.406 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.406 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.406 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.406 "name": "raid_bdev1", 00:13:35.406 "uuid": "16fbee79-285a-47d9-9e34-01fc4c41477d", 00:13:35.406 "strip_size_kb": 0, 00:13:35.406 "state": "online", 00:13:35.406 "raid_level": 
"raid1", 00:13:35.406 "superblock": false, 00:13:35.406 "num_base_bdevs": 4, 00:13:35.406 "num_base_bdevs_discovered": 4, 00:13:35.406 "num_base_bdevs_operational": 4, 00:13:35.406 "base_bdevs_list": [ 00:13:35.406 { 00:13:35.406 "name": "BaseBdev1", 00:13:35.406 "uuid": "973d369b-0bcb-5261-9021-3943a0329d24", 00:13:35.406 "is_configured": true, 00:13:35.406 "data_offset": 0, 00:13:35.406 "data_size": 65536 00:13:35.406 }, 00:13:35.406 { 00:13:35.406 "name": "BaseBdev2", 00:13:35.406 "uuid": "d5335734-a3ad-5a2b-8183-57380b15d568", 00:13:35.406 "is_configured": true, 00:13:35.406 "data_offset": 0, 00:13:35.406 "data_size": 65536 00:13:35.406 }, 00:13:35.406 { 00:13:35.406 "name": "BaseBdev3", 00:13:35.406 "uuid": "6f89e45d-e1c7-53ce-9c77-705b21080e55", 00:13:35.406 "is_configured": true, 00:13:35.406 "data_offset": 0, 00:13:35.406 "data_size": 65536 00:13:35.406 }, 00:13:35.406 { 00:13:35.406 "name": "BaseBdev4", 00:13:35.406 "uuid": "19f94efa-0bfa-5b93-bfcb-fb4c9aa783ff", 00:13:35.406 "is_configured": true, 00:13:35.406 "data_offset": 0, 00:13:35.406 "data_size": 65536 00:13:35.406 } 00:13:35.406 ] 00:13:35.406 }' 00:13:35.406 12:05:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.406 12:05:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.977 12:05:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:35.977 12:05:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.977 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.977 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.977 [2024-11-19 12:05:39.139847] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.977 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.977 12:05:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:35.977 12:05:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.977 12:05:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:35.977 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.977 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.977 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.977 12:05:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:35.977 12:05:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:35.977 12:05:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:35.977 12:05:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:35.978 12:05:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:35.978 12:05:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.978 12:05:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:35.978 12:05:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:35.978 12:05:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:35.978 12:05:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:35.978 12:05:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:35.978 12:05:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:35.978 12:05:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:35.978 12:05:39 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:36.240 [2024-11-19 12:05:39.391242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:36.240 /dev/nbd0 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.240 1+0 records in 00:13:36.240 1+0 records out 00:13:36.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281994 s, 14.5 MB/s 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:36.240 12:05:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:41.512 65536+0 records in 00:13:41.512 65536+0 records out 00:13:41.512 33554432 bytes (34 MB, 32 MiB) copied, 5.13054 s, 6.5 MB/s 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:41.512 [2024-11-19 12:05:44.791440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:41.512 
12:05:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.512 [2024-11-19 12:05:44.833441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.512 12:05:44 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.512 12:05:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.771 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.771 "name": "raid_bdev1", 00:13:41.772 "uuid": "16fbee79-285a-47d9-9e34-01fc4c41477d", 00:13:41.772 "strip_size_kb": 0, 00:13:41.772 "state": "online", 00:13:41.772 "raid_level": "raid1", 00:13:41.772 "superblock": false, 00:13:41.772 "num_base_bdevs": 4, 00:13:41.772 "num_base_bdevs_discovered": 3, 00:13:41.772 "num_base_bdevs_operational": 3, 00:13:41.772 "base_bdevs_list": [ 00:13:41.772 { 00:13:41.772 "name": null, 00:13:41.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.772 "is_configured": false, 00:13:41.772 "data_offset": 0, 00:13:41.772 "data_size": 65536 00:13:41.772 }, 00:13:41.772 { 00:13:41.772 "name": "BaseBdev2", 00:13:41.772 "uuid": "d5335734-a3ad-5a2b-8183-57380b15d568", 00:13:41.772 "is_configured": true, 00:13:41.772 "data_offset": 0, 00:13:41.772 "data_size": 65536 00:13:41.772 }, 00:13:41.772 { 00:13:41.772 "name": "BaseBdev3", 00:13:41.772 "uuid": "6f89e45d-e1c7-53ce-9c77-705b21080e55", 00:13:41.772 "is_configured": true, 00:13:41.772 "data_offset": 0, 00:13:41.772 "data_size": 65536 00:13:41.772 }, 00:13:41.772 { 00:13:41.772 "name": "BaseBdev4", 00:13:41.772 "uuid": "19f94efa-0bfa-5b93-bfcb-fb4c9aa783ff", 00:13:41.772 
"is_configured": true, 00:13:41.772 "data_offset": 0, 00:13:41.772 "data_size": 65536 00:13:41.772 } 00:13:41.772 ] 00:13:41.772 }' 00:13:41.772 12:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.772 12:05:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.031 12:05:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.031 12:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.031 12:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.031 [2024-11-19 12:05:45.296622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.031 [2024-11-19 12:05:45.311254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:42.031 12:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.031 12:05:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:42.031 [2024-11-19 12:05:45.313035] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.967 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.967 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.967 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.967 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.967 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.967 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.967 12:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.967 
12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.967 12:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.226 "name": "raid_bdev1", 00:13:43.226 "uuid": "16fbee79-285a-47d9-9e34-01fc4c41477d", 00:13:43.226 "strip_size_kb": 0, 00:13:43.226 "state": "online", 00:13:43.226 "raid_level": "raid1", 00:13:43.226 "superblock": false, 00:13:43.226 "num_base_bdevs": 4, 00:13:43.226 "num_base_bdevs_discovered": 4, 00:13:43.226 "num_base_bdevs_operational": 4, 00:13:43.226 "process": { 00:13:43.226 "type": "rebuild", 00:13:43.226 "target": "spare", 00:13:43.226 "progress": { 00:13:43.226 "blocks": 20480, 00:13:43.226 "percent": 31 00:13:43.226 } 00:13:43.226 }, 00:13:43.226 "base_bdevs_list": [ 00:13:43.226 { 00:13:43.226 "name": "spare", 00:13:43.226 "uuid": "21edcf1d-671d-58ea-be55-5bd045a3c8f1", 00:13:43.226 "is_configured": true, 00:13:43.226 "data_offset": 0, 00:13:43.226 "data_size": 65536 00:13:43.226 }, 00:13:43.226 { 00:13:43.226 "name": "BaseBdev2", 00:13:43.226 "uuid": "d5335734-a3ad-5a2b-8183-57380b15d568", 00:13:43.226 "is_configured": true, 00:13:43.226 "data_offset": 0, 00:13:43.226 "data_size": 65536 00:13:43.226 }, 00:13:43.226 { 00:13:43.226 "name": "BaseBdev3", 00:13:43.226 "uuid": "6f89e45d-e1c7-53ce-9c77-705b21080e55", 00:13:43.226 "is_configured": true, 00:13:43.226 "data_offset": 0, 00:13:43.226 "data_size": 65536 00:13:43.226 }, 00:13:43.226 { 00:13:43.226 "name": "BaseBdev4", 00:13:43.226 "uuid": "19f94efa-0bfa-5b93-bfcb-fb4c9aa783ff", 00:13:43.226 "is_configured": true, 00:13:43.226 "data_offset": 0, 00:13:43.226 "data_size": 65536 00:13:43.226 } 00:13:43.226 ] 00:13:43.226 }' 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.226 [2024-11-19 12:05:46.476511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.226 [2024-11-19 12:05:46.518028] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:43.226 [2024-11-19 12:05:46.518084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.226 [2024-11-19 12:05:46.518099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.226 [2024-11-19 12:05:46.518108] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.226 12:05:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.226 12:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.227 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.227 "name": "raid_bdev1", 00:13:43.227 "uuid": "16fbee79-285a-47d9-9e34-01fc4c41477d", 00:13:43.227 "strip_size_kb": 0, 00:13:43.227 "state": "online", 00:13:43.227 "raid_level": "raid1", 00:13:43.227 "superblock": false, 00:13:43.227 "num_base_bdevs": 4, 00:13:43.227 "num_base_bdevs_discovered": 3, 00:13:43.227 "num_base_bdevs_operational": 3, 00:13:43.227 "base_bdevs_list": [ 00:13:43.227 { 00:13:43.227 "name": null, 00:13:43.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.227 "is_configured": false, 00:13:43.227 "data_offset": 0, 00:13:43.227 "data_size": 65536 00:13:43.227 }, 00:13:43.227 { 00:13:43.227 "name": "BaseBdev2", 00:13:43.227 "uuid": "d5335734-a3ad-5a2b-8183-57380b15d568", 00:13:43.227 "is_configured": true, 00:13:43.227 "data_offset": 0, 00:13:43.227 "data_size": 65536 00:13:43.227 }, 00:13:43.227 { 00:13:43.227 "name": 
"BaseBdev3", 00:13:43.227 "uuid": "6f89e45d-e1c7-53ce-9c77-705b21080e55", 00:13:43.227 "is_configured": true, 00:13:43.227 "data_offset": 0, 00:13:43.227 "data_size": 65536 00:13:43.227 }, 00:13:43.227 { 00:13:43.227 "name": "BaseBdev4", 00:13:43.227 "uuid": "19f94efa-0bfa-5b93-bfcb-fb4c9aa783ff", 00:13:43.227 "is_configured": true, 00:13:43.227 "data_offset": 0, 00:13:43.227 "data_size": 65536 00:13:43.227 } 00:13:43.227 ] 00:13:43.227 }' 00:13:43.227 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.227 12:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.795 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.795 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.795 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.795 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.795 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.795 12:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.795 12:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 12:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 12:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 12:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.795 "name": "raid_bdev1", 00:13:43.795 "uuid": "16fbee79-285a-47d9-9e34-01fc4c41477d", 00:13:43.795 "strip_size_kb": 0, 00:13:43.795 "state": "online", 00:13:43.795 "raid_level": 
"raid1", 00:13:43.795 "superblock": false, 00:13:43.795 "num_base_bdevs": 4, 00:13:43.795 "num_base_bdevs_discovered": 3, 00:13:43.795 "num_base_bdevs_operational": 3, 00:13:43.795 "base_bdevs_list": [ 00:13:43.795 { 00:13:43.795 "name": null, 00:13:43.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.795 "is_configured": false, 00:13:43.795 "data_offset": 0, 00:13:43.795 "data_size": 65536 00:13:43.795 }, 00:13:43.795 { 00:13:43.795 "name": "BaseBdev2", 00:13:43.795 "uuid": "d5335734-a3ad-5a2b-8183-57380b15d568", 00:13:43.795 "is_configured": true, 00:13:43.795 "data_offset": 0, 00:13:43.795 "data_size": 65536 00:13:43.795 }, 00:13:43.795 { 00:13:43.795 "name": "BaseBdev3", 00:13:43.795 "uuid": "6f89e45d-e1c7-53ce-9c77-705b21080e55", 00:13:43.795 "is_configured": true, 00:13:43.795 "data_offset": 0, 00:13:43.795 "data_size": 65536 00:13:43.795 }, 00:13:43.795 { 00:13:43.795 "name": "BaseBdev4", 00:13:43.795 "uuid": "19f94efa-0bfa-5b93-bfcb-fb4c9aa783ff", 00:13:43.795 "is_configured": true, 00:13:43.796 "data_offset": 0, 00:13:43.796 "data_size": 65536 00:13:43.796 } 00:13:43.796 ] 00:13:43.796 }' 00:13:43.796 12:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.796 12:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.796 12:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.796 12:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.796 12:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:43.796 12:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.796 12:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.796 [2024-11-19 12:05:47.101881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:13:43.796 [2024-11-19 12:05:47.116636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:43.796 12:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.796 12:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:43.796 [2024-11-19 12:05:47.118491] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.175 "name": "raid_bdev1", 00:13:45.175 "uuid": "16fbee79-285a-47d9-9e34-01fc4c41477d", 00:13:45.175 "strip_size_kb": 0, 00:13:45.175 "state": "online", 00:13:45.175 "raid_level": "raid1", 00:13:45.175 "superblock": false, 00:13:45.175 "num_base_bdevs": 4, 00:13:45.175 "num_base_bdevs_discovered": 4, 00:13:45.175 "num_base_bdevs_operational": 4, 
00:13:45.175 "process": { 00:13:45.175 "type": "rebuild", 00:13:45.175 "target": "spare", 00:13:45.175 "progress": { 00:13:45.175 "blocks": 20480, 00:13:45.175 "percent": 31 00:13:45.175 } 00:13:45.175 }, 00:13:45.175 "base_bdevs_list": [ 00:13:45.175 { 00:13:45.175 "name": "spare", 00:13:45.175 "uuid": "21edcf1d-671d-58ea-be55-5bd045a3c8f1", 00:13:45.175 "is_configured": true, 00:13:45.175 "data_offset": 0, 00:13:45.175 "data_size": 65536 00:13:45.175 }, 00:13:45.175 { 00:13:45.175 "name": "BaseBdev2", 00:13:45.175 "uuid": "d5335734-a3ad-5a2b-8183-57380b15d568", 00:13:45.175 "is_configured": true, 00:13:45.175 "data_offset": 0, 00:13:45.175 "data_size": 65536 00:13:45.175 }, 00:13:45.175 { 00:13:45.175 "name": "BaseBdev3", 00:13:45.175 "uuid": "6f89e45d-e1c7-53ce-9c77-705b21080e55", 00:13:45.175 "is_configured": true, 00:13:45.175 "data_offset": 0, 00:13:45.175 "data_size": 65536 00:13:45.175 }, 00:13:45.175 { 00:13:45.175 "name": "BaseBdev4", 00:13:45.175 "uuid": "19f94efa-0bfa-5b93-bfcb-fb4c9aa783ff", 00:13:45.175 "is_configured": true, 00:13:45.175 "data_offset": 0, 00:13:45.175 "data_size": 65536 00:13:45.175 } 00:13:45.175 ] 00:13:45.175 }' 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.175 [2024-11-19 12:05:48.278315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:45.175 [2024-11-19 12:05:48.323344] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:45.175 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.175 "name": "raid_bdev1", 00:13:45.175 "uuid": "16fbee79-285a-47d9-9e34-01fc4c41477d", 00:13:45.175 "strip_size_kb": 0, 00:13:45.175 "state": "online", 00:13:45.175 "raid_level": "raid1", 00:13:45.175 "superblock": false, 00:13:45.176 "num_base_bdevs": 4, 00:13:45.176 "num_base_bdevs_discovered": 3, 00:13:45.176 "num_base_bdevs_operational": 3, 00:13:45.176 "process": { 00:13:45.176 "type": "rebuild", 00:13:45.176 "target": "spare", 00:13:45.176 "progress": { 00:13:45.176 "blocks": 24576, 00:13:45.176 "percent": 37 00:13:45.176 } 00:13:45.176 }, 00:13:45.176 "base_bdevs_list": [ 00:13:45.176 { 00:13:45.176 "name": "spare", 00:13:45.176 "uuid": "21edcf1d-671d-58ea-be55-5bd045a3c8f1", 00:13:45.176 "is_configured": true, 00:13:45.176 "data_offset": 0, 00:13:45.176 "data_size": 65536 00:13:45.176 }, 00:13:45.176 { 00:13:45.176 "name": null, 00:13:45.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.176 "is_configured": false, 00:13:45.176 "data_offset": 0, 00:13:45.176 "data_size": 65536 00:13:45.176 }, 00:13:45.176 { 00:13:45.176 "name": "BaseBdev3", 00:13:45.176 "uuid": "6f89e45d-e1c7-53ce-9c77-705b21080e55", 00:13:45.176 "is_configured": true, 00:13:45.176 "data_offset": 0, 00:13:45.176 "data_size": 65536 00:13:45.176 }, 00:13:45.176 { 00:13:45.176 "name": "BaseBdev4", 00:13:45.176 "uuid": "19f94efa-0bfa-5b93-bfcb-fb4c9aa783ff", 00:13:45.176 "is_configured": true, 00:13:45.176 "data_offset": 0, 00:13:45.176 "data_size": 65536 00:13:45.176 } 00:13:45.176 ] 00:13:45.176 }' 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.176 12:05:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=441 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.176 "name": "raid_bdev1", 00:13:45.176 "uuid": "16fbee79-285a-47d9-9e34-01fc4c41477d", 00:13:45.176 "strip_size_kb": 0, 00:13:45.176 "state": "online", 00:13:45.176 "raid_level": "raid1", 00:13:45.176 "superblock": false, 00:13:45.176 "num_base_bdevs": 4, 00:13:45.176 "num_base_bdevs_discovered": 3, 00:13:45.176 "num_base_bdevs_operational": 3, 00:13:45.176 "process": { 00:13:45.176 "type": "rebuild", 00:13:45.176 "target": "spare", 00:13:45.176 "progress": { 00:13:45.176 "blocks": 26624, 00:13:45.176 "percent": 40 
00:13:45.176 } 00:13:45.176 }, 00:13:45.176 "base_bdevs_list": [ 00:13:45.176 { 00:13:45.176 "name": "spare", 00:13:45.176 "uuid": "21edcf1d-671d-58ea-be55-5bd045a3c8f1", 00:13:45.176 "is_configured": true, 00:13:45.176 "data_offset": 0, 00:13:45.176 "data_size": 65536 00:13:45.176 }, 00:13:45.176 { 00:13:45.176 "name": null, 00:13:45.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.176 "is_configured": false, 00:13:45.176 "data_offset": 0, 00:13:45.176 "data_size": 65536 00:13:45.176 }, 00:13:45.176 { 00:13:45.176 "name": "BaseBdev3", 00:13:45.176 "uuid": "6f89e45d-e1c7-53ce-9c77-705b21080e55", 00:13:45.176 "is_configured": true, 00:13:45.176 "data_offset": 0, 00:13:45.176 "data_size": 65536 00:13:45.176 }, 00:13:45.176 { 00:13:45.176 "name": "BaseBdev4", 00:13:45.176 "uuid": "19f94efa-0bfa-5b93-bfcb-fb4c9aa783ff", 00:13:45.176 "is_configured": true, 00:13:45.176 "data_offset": 0, 00:13:45.176 "data_size": 65536 00:13:45.176 } 00:13:45.176 ] 00:13:45.176 }' 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.176 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.435 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.435 12:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:46.371 12:05:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.371 12:05:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.371 12:05:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.372 12:05:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.372 12:05:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.372 12:05:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.372 12:05:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.372 12:05:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.372 12:05:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.372 12:05:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.372 12:05:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.372 12:05:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.372 "name": "raid_bdev1", 00:13:46.372 "uuid": "16fbee79-285a-47d9-9e34-01fc4c41477d", 00:13:46.372 "strip_size_kb": 0, 00:13:46.372 "state": "online", 00:13:46.372 "raid_level": "raid1", 00:13:46.372 "superblock": false, 00:13:46.372 "num_base_bdevs": 4, 00:13:46.372 "num_base_bdevs_discovered": 3, 00:13:46.372 "num_base_bdevs_operational": 3, 00:13:46.372 "process": { 00:13:46.372 "type": "rebuild", 00:13:46.372 "target": "spare", 00:13:46.372 "progress": { 00:13:46.372 "blocks": 49152, 00:13:46.372 "percent": 75 00:13:46.372 } 00:13:46.372 }, 00:13:46.372 "base_bdevs_list": [ 00:13:46.372 { 00:13:46.372 "name": "spare", 00:13:46.372 "uuid": "21edcf1d-671d-58ea-be55-5bd045a3c8f1", 00:13:46.372 "is_configured": true, 00:13:46.372 "data_offset": 0, 00:13:46.372 "data_size": 65536 00:13:46.372 }, 00:13:46.372 { 00:13:46.372 "name": null, 00:13:46.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.372 "is_configured": false, 00:13:46.372 "data_offset": 0, 00:13:46.372 "data_size": 65536 00:13:46.372 }, 00:13:46.372 { 00:13:46.372 "name": "BaseBdev3", 00:13:46.372 "uuid": "6f89e45d-e1c7-53ce-9c77-705b21080e55", 00:13:46.372 "is_configured": true, 
00:13:46.372 "data_offset": 0, 00:13:46.372 "data_size": 65536 00:13:46.372 }, 00:13:46.372 { 00:13:46.372 "name": "BaseBdev4", 00:13:46.372 "uuid": "19f94efa-0bfa-5b93-bfcb-fb4c9aa783ff", 00:13:46.372 "is_configured": true, 00:13:46.372 "data_offset": 0, 00:13:46.372 "data_size": 65536 00:13:46.372 } 00:13:46.372 ] 00:13:46.372 }' 00:13:46.372 12:05:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.372 12:05:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.372 12:05:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.372 12:05:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.372 12:05:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:47.349 [2024-11-19 12:05:50.331047] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:47.349 [2024-11-19 12:05:50.331200] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:47.349 [2024-11-19 12:05:50.331276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.608 "name": "raid_bdev1", 00:13:47.608 "uuid": "16fbee79-285a-47d9-9e34-01fc4c41477d", 00:13:47.608 "strip_size_kb": 0, 00:13:47.608 "state": "online", 00:13:47.608 "raid_level": "raid1", 00:13:47.608 "superblock": false, 00:13:47.608 "num_base_bdevs": 4, 00:13:47.608 "num_base_bdevs_discovered": 3, 00:13:47.608 "num_base_bdevs_operational": 3, 00:13:47.608 "base_bdevs_list": [ 00:13:47.608 { 00:13:47.608 "name": "spare", 00:13:47.608 "uuid": "21edcf1d-671d-58ea-be55-5bd045a3c8f1", 00:13:47.608 "is_configured": true, 00:13:47.608 "data_offset": 0, 00:13:47.608 "data_size": 65536 00:13:47.608 }, 00:13:47.608 { 00:13:47.608 "name": null, 00:13:47.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.608 "is_configured": false, 00:13:47.608 "data_offset": 0, 00:13:47.608 "data_size": 65536 00:13:47.608 }, 00:13:47.608 { 00:13:47.608 "name": "BaseBdev3", 00:13:47.608 "uuid": "6f89e45d-e1c7-53ce-9c77-705b21080e55", 00:13:47.608 "is_configured": true, 00:13:47.608 "data_offset": 0, 00:13:47.608 "data_size": 65536 00:13:47.608 }, 00:13:47.608 { 00:13:47.608 "name": "BaseBdev4", 00:13:47.608 "uuid": "19f94efa-0bfa-5b93-bfcb-fb4c9aa783ff", 00:13:47.608 "is_configured": true, 00:13:47.608 "data_offset": 0, 00:13:47.608 "data_size": 65536 00:13:47.608 } 00:13:47.608 ] 00:13:47.608 }' 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.608 12:05:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.608 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.608 "name": "raid_bdev1", 00:13:47.608 "uuid": "16fbee79-285a-47d9-9e34-01fc4c41477d", 00:13:47.608 "strip_size_kb": 0, 00:13:47.608 "state": "online", 00:13:47.608 "raid_level": "raid1", 00:13:47.608 "superblock": false, 00:13:47.608 "num_base_bdevs": 4, 00:13:47.608 "num_base_bdevs_discovered": 3, 00:13:47.608 "num_base_bdevs_operational": 3, 00:13:47.608 "base_bdevs_list": [ 00:13:47.608 { 00:13:47.608 "name": "spare", 
00:13:47.609 "uuid": "21edcf1d-671d-58ea-be55-5bd045a3c8f1", 00:13:47.609 "is_configured": true, 00:13:47.609 "data_offset": 0, 00:13:47.609 "data_size": 65536 00:13:47.609 }, 00:13:47.609 { 00:13:47.609 "name": null, 00:13:47.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.609 "is_configured": false, 00:13:47.609 "data_offset": 0, 00:13:47.609 "data_size": 65536 00:13:47.609 }, 00:13:47.609 { 00:13:47.609 "name": "BaseBdev3", 00:13:47.609 "uuid": "6f89e45d-e1c7-53ce-9c77-705b21080e55", 00:13:47.609 "is_configured": true, 00:13:47.609 "data_offset": 0, 00:13:47.609 "data_size": 65536 00:13:47.609 }, 00:13:47.609 { 00:13:47.609 "name": "BaseBdev4", 00:13:47.609 "uuid": "19f94efa-0bfa-5b93-bfcb-fb4c9aa783ff", 00:13:47.609 "is_configured": true, 00:13:47.609 "data_offset": 0, 00:13:47.609 "data_size": 65536 00:13:47.609 } 00:13:47.609 ] 00:13:47.609 }' 00:13:47.609 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.609 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.609 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.868 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.868 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:47.868 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.868 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.868 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.868 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.868 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.868 12:05:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.868 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.868 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.868 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.868 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.868 12:05:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.868 12:05:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.868 12:05:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.868 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.868 12:05:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.868 "name": "raid_bdev1", 00:13:47.868 "uuid": "16fbee79-285a-47d9-9e34-01fc4c41477d", 00:13:47.868 "strip_size_kb": 0, 00:13:47.868 "state": "online", 00:13:47.868 "raid_level": "raid1", 00:13:47.868 "superblock": false, 00:13:47.868 "num_base_bdevs": 4, 00:13:47.868 "num_base_bdevs_discovered": 3, 00:13:47.868 "num_base_bdevs_operational": 3, 00:13:47.868 "base_bdevs_list": [ 00:13:47.868 { 00:13:47.868 "name": "spare", 00:13:47.868 "uuid": "21edcf1d-671d-58ea-be55-5bd045a3c8f1", 00:13:47.868 "is_configured": true, 00:13:47.868 "data_offset": 0, 00:13:47.868 "data_size": 65536 00:13:47.868 }, 00:13:47.868 { 00:13:47.868 "name": null, 00:13:47.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.868 "is_configured": false, 00:13:47.868 "data_offset": 0, 00:13:47.868 "data_size": 65536 00:13:47.868 }, 00:13:47.868 { 00:13:47.868 "name": "BaseBdev3", 00:13:47.868 "uuid": "6f89e45d-e1c7-53ce-9c77-705b21080e55", 00:13:47.868 "is_configured": true, 
00:13:47.868 "data_offset": 0, 00:13:47.868 "data_size": 65536 00:13:47.868 }, 00:13:47.868 { 00:13:47.868 "name": "BaseBdev4", 00:13:47.868 "uuid": "19f94efa-0bfa-5b93-bfcb-fb4c9aa783ff", 00:13:47.868 "is_configured": true, 00:13:47.868 "data_offset": 0, 00:13:47.868 "data_size": 65536 00:13:47.868 } 00:13:47.868 ] 00:13:47.868 }' 00:13:47.868 12:05:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.868 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.128 [2024-11-19 12:05:51.399392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:48.128 [2024-11-19 12:05:51.399469] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:48.128 [2024-11-19 12:05:51.399586] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.128 [2024-11-19 12:05:51.399706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.128 [2024-11-19 12:05:51.399752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # 
jq length 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:48.128 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:48.388 /dev/nbd0 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:48.388 12:05:51 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.388 1+0 records in 00:13:48.388 1+0 records out 00:13:48.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034285 s, 11.9 MB/s 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:48.388 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:48.649 /dev/nbd1 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:48.649 
12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.649 1+0 records in 00:13:48.649 1+0 records out 00:13:48.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443389 s, 9.2 MB/s 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:13:48.649 12:05:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:48.910 12:05:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:48.910 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.910 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:48.910 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:48.910 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:48.910 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:48.910 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:49.170 
12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:49.170 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:49.430 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:49.430 12:05:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:49.430 12:05:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:49.430 12:05:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77535 00:13:49.430 12:05:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77535 ']' 00:13:49.430 12:05:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77535 00:13:49.430 12:05:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:49.430 12:05:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.430 12:05:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77535 00:13:49.430 killing process with pid 77535 00:13:49.430 Received shutdown signal, test time was about 60.000000 seconds 00:13:49.430 00:13:49.430 Latency(us) 00:13:49.430 [2024-11-19T12:05:52.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.430 [2024-11-19T12:05:52.807Z] =================================================================================================================== 00:13:49.430 [2024-11-19T12:05:52.807Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:13:49.430 12:05:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.430 12:05:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.430 12:05:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77535' 00:13:49.430 12:05:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77535 00:13:49.430 [2024-11-19 12:05:52.586610] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:49.430 12:05:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77535 00:13:49.690 [2024-11-19 12:05:53.046695] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:51.071 00:13:51.071 real 0m16.622s 00:13:51.071 user 0m18.780s 00:13:51.071 sys 0m2.817s 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.071 ************************************ 00:13:51.071 END TEST raid_rebuild_test 00:13:51.071 ************************************ 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.071 12:05:54 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:51.071 12:05:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:51.071 12:05:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.071 12:05:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.071 ************************************ 00:13:51.071 START TEST raid_rebuild_test_sb 00:13:51.071 ************************************ 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:51.071 12:05:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77970 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77970 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77970 ']' 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.071 12:05:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.071 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:51.071 Zero copy mechanism will not be used. 00:13:51.071 [2024-11-19 12:05:54.254364] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:13:51.071 [2024-11-19 12:05:54.254490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77970 ] 00:13:51.071 [2024-11-19 12:05:54.425677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.332 [2024-11-19 12:05:54.536244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.592 [2024-11-19 12:05:54.723036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.592 [2024-11-19 12:05:54.723083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:51.852 12:05:55 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.852 BaseBdev1_malloc 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.852 [2024-11-19 12:05:55.113650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:51.852 [2024-11-19 12:05:55.113729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.852 [2024-11-19 12:05:55.113750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:51.852 [2024-11-19 12:05:55.113760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.852 [2024-11-19 12:05:55.115765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.852 [2024-11-19 12:05:55.115876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:51.852 BaseBdev1 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.852 
BaseBdev2_malloc 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.852 [2024-11-19 12:05:55.167119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:51.852 [2024-11-19 12:05:55.167172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.852 [2024-11-19 12:05:55.167189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:51.852 [2024-11-19 12:05:55.167200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.852 [2024-11-19 12:05:55.169203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.852 [2024-11-19 12:05:55.169290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:51.852 BaseBdev2 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.852 BaseBdev3_malloc 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.852 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.113 [2024-11-19 12:05:55.227098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:52.113 [2024-11-19 12:05:55.227165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.113 [2024-11-19 12:05:55.227183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:52.113 [2024-11-19 12:05:55.227194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.113 [2024-11-19 12:05:55.229196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.113 [2024-11-19 12:05:55.229233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:52.113 BaseBdev3 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.113 BaseBdev4_malloc 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.113 [2024-11-19 12:05:55.279823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:52.113 [2024-11-19 12:05:55.279873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.113 [2024-11-19 12:05:55.279892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:52.113 [2024-11-19 12:05:55.279902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.113 [2024-11-19 12:05:55.281845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.113 [2024-11-19 12:05:55.281886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:52.113 BaseBdev4 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.113 spare_malloc 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.113 spare_delay 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.113 [2024-11-19 12:05:55.345734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:52.113 [2024-11-19 12:05:55.345787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.113 [2024-11-19 12:05:55.345823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:52.113 [2024-11-19 12:05:55.345833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.113 [2024-11-19 12:05:55.347822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.113 [2024-11-19 12:05:55.347860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:52.113 spare 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.113 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.113 [2024-11-19 12:05:55.357767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.113 [2024-11-19 12:05:55.359552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.113 [2024-11-19 12:05:55.359616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.113 [2024-11-19 12:05:55.359664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:52.113 [2024-11-19 12:05:55.359825] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:52.113 [2024-11-19 12:05:55.359842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:52.113 [2024-11-19 12:05:55.360075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:52.113 [2024-11-19 12:05:55.360257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:52.114 [2024-11-19 12:05:55.360268] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:52.114 [2024-11-19 12:05:55.360412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.114 "name": "raid_bdev1", 00:13:52.114 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:13:52.114 "strip_size_kb": 0, 00:13:52.114 "state": "online", 00:13:52.114 "raid_level": "raid1", 00:13:52.114 "superblock": true, 00:13:52.114 "num_base_bdevs": 4, 00:13:52.114 "num_base_bdevs_discovered": 4, 00:13:52.114 "num_base_bdevs_operational": 4, 00:13:52.114 "base_bdevs_list": [ 00:13:52.114 { 00:13:52.114 "name": "BaseBdev1", 00:13:52.114 "uuid": "3deac68a-3ad2-5e83-8470-819c2101dafe", 00:13:52.114 "is_configured": true, 00:13:52.114 "data_offset": 2048, 00:13:52.114 "data_size": 63488 00:13:52.114 }, 00:13:52.114 { 00:13:52.114 "name": "BaseBdev2", 00:13:52.114 "uuid": "9a3cca31-c3ef-58e8-bebf-4d34245b2e7b", 00:13:52.114 "is_configured": true, 00:13:52.114 "data_offset": 2048, 00:13:52.114 "data_size": 63488 00:13:52.114 }, 00:13:52.114 { 00:13:52.114 "name": "BaseBdev3", 00:13:52.114 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:13:52.114 "is_configured": true, 00:13:52.114 "data_offset": 2048, 00:13:52.114 "data_size": 63488 00:13:52.114 }, 00:13:52.114 { 00:13:52.114 "name": "BaseBdev4", 00:13:52.114 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:13:52.114 "is_configured": true, 00:13:52.114 "data_offset": 2048, 00:13:52.114 "data_size": 63488 00:13:52.114 } 00:13:52.114 ] 00:13:52.114 }' 
00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.114 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.682 [2024-11-19 12:05:55.825334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:52.682 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:52.683 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:52.683 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:52.683 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:52.683 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:52.683 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:52.683 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:52.683 12:05:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:52.942 [2024-11-19 12:05:56.104612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:52.942 /dev/nbd0 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@877 -- # break 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:52.942 1+0 records in 00:13:52.942 1+0 records out 00:13:52.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415669 s, 9.9 MB/s 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.942 12:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:52.943 12:05:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:52.943 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:52.943 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:52.943 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:52.943 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:52.943 12:05:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:58.216 63488+0 records in 00:13:58.216 63488+0 records out 00:13:58.216 32505856 bytes (33 MB, 31 MiB) copied, 4.80835 s, 6.8 MB/s 00:13:58.216 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:58.216 12:06:00 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:58.216 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:58.216 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:58.216 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:58.216 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.216 12:06:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:58.216 [2024-11-19 12:06:01.185225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.216 [2024-11-19 12:06:01.197299] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.216 "name": "raid_bdev1", 00:13:58.216 "uuid": 
"aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:13:58.216 "strip_size_kb": 0, 00:13:58.216 "state": "online", 00:13:58.216 "raid_level": "raid1", 00:13:58.216 "superblock": true, 00:13:58.216 "num_base_bdevs": 4, 00:13:58.216 "num_base_bdevs_discovered": 3, 00:13:58.216 "num_base_bdevs_operational": 3, 00:13:58.216 "base_bdevs_list": [ 00:13:58.216 { 00:13:58.216 "name": null, 00:13:58.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.216 "is_configured": false, 00:13:58.216 "data_offset": 0, 00:13:58.216 "data_size": 63488 00:13:58.216 }, 00:13:58.216 { 00:13:58.216 "name": "BaseBdev2", 00:13:58.216 "uuid": "9a3cca31-c3ef-58e8-bebf-4d34245b2e7b", 00:13:58.216 "is_configured": true, 00:13:58.216 "data_offset": 2048, 00:13:58.216 "data_size": 63488 00:13:58.216 }, 00:13:58.216 { 00:13:58.216 "name": "BaseBdev3", 00:13:58.216 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:13:58.216 "is_configured": true, 00:13:58.216 "data_offset": 2048, 00:13:58.216 "data_size": 63488 00:13:58.216 }, 00:13:58.216 { 00:13:58.216 "name": "BaseBdev4", 00:13:58.216 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:13:58.216 "is_configured": true, 00:13:58.216 "data_offset": 2048, 00:13:58.216 "data_size": 63488 00:13:58.216 } 00:13:58.216 ] 00:13:58.216 }' 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.216 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.475 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:58.475 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.475 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.475 [2024-11-19 12:06:01.652524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:58.475 [2024-11-19 12:06:01.666195] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:58.475 12:06:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.475 12:06:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:58.475 [2024-11-19 12:06:01.668027] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:59.413 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.413 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.413 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.413 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.413 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.413 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.413 12:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.413 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.414 12:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.414 12:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.414 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.414 "name": "raid_bdev1", 00:13:59.414 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:13:59.414 "strip_size_kb": 0, 00:13:59.414 "state": "online", 00:13:59.414 "raid_level": "raid1", 00:13:59.414 "superblock": true, 00:13:59.414 "num_base_bdevs": 4, 00:13:59.414 "num_base_bdevs_discovered": 4, 00:13:59.414 "num_base_bdevs_operational": 4, 00:13:59.414 "process": { 00:13:59.414 "type": 
"rebuild", 00:13:59.414 "target": "spare", 00:13:59.414 "progress": { 00:13:59.414 "blocks": 20480, 00:13:59.414 "percent": 32 00:13:59.414 } 00:13:59.414 }, 00:13:59.414 "base_bdevs_list": [ 00:13:59.414 { 00:13:59.414 "name": "spare", 00:13:59.414 "uuid": "4e2a95e4-a2e9-567d-bd8f-8f951f4aa60e", 00:13:59.414 "is_configured": true, 00:13:59.414 "data_offset": 2048, 00:13:59.414 "data_size": 63488 00:13:59.414 }, 00:13:59.414 { 00:13:59.414 "name": "BaseBdev2", 00:13:59.414 "uuid": "9a3cca31-c3ef-58e8-bebf-4d34245b2e7b", 00:13:59.414 "is_configured": true, 00:13:59.414 "data_offset": 2048, 00:13:59.414 "data_size": 63488 00:13:59.414 }, 00:13:59.414 { 00:13:59.414 "name": "BaseBdev3", 00:13:59.414 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:13:59.414 "is_configured": true, 00:13:59.414 "data_offset": 2048, 00:13:59.414 "data_size": 63488 00:13:59.414 }, 00:13:59.414 { 00:13:59.414 "name": "BaseBdev4", 00:13:59.414 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:13:59.414 "is_configured": true, 00:13:59.414 "data_offset": 2048, 00:13:59.414 "data_size": 63488 00:13:59.414 } 00:13:59.414 ] 00:13:59.414 }' 00:13:59.414 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.414 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.414 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.672 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.673 [2024-11-19 12:06:02.835218] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:59.673 [2024-11-19 12:06:02.872746] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:59.673 [2024-11-19 12:06:02.872819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.673 [2024-11-19 12:06:02.872835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:59.673 [2024-11-19 12:06:02.872844] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 
-- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.673 "name": "raid_bdev1", 00:13:59.673 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:13:59.673 "strip_size_kb": 0, 00:13:59.673 "state": "online", 00:13:59.673 "raid_level": "raid1", 00:13:59.673 "superblock": true, 00:13:59.673 "num_base_bdevs": 4, 00:13:59.673 "num_base_bdevs_discovered": 3, 00:13:59.673 "num_base_bdevs_operational": 3, 00:13:59.673 "base_bdevs_list": [ 00:13:59.673 { 00:13:59.673 "name": null, 00:13:59.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.673 "is_configured": false, 00:13:59.673 "data_offset": 0, 00:13:59.673 "data_size": 63488 00:13:59.673 }, 00:13:59.673 { 00:13:59.673 "name": "BaseBdev2", 00:13:59.673 "uuid": "9a3cca31-c3ef-58e8-bebf-4d34245b2e7b", 00:13:59.673 "is_configured": true, 00:13:59.673 "data_offset": 2048, 00:13:59.673 "data_size": 63488 00:13:59.673 }, 00:13:59.673 { 00:13:59.673 "name": "BaseBdev3", 00:13:59.673 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:13:59.673 "is_configured": true, 00:13:59.673 "data_offset": 2048, 00:13:59.673 "data_size": 63488 00:13:59.673 }, 00:13:59.673 { 00:13:59.673 "name": "BaseBdev4", 00:13:59.673 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:13:59.673 "is_configured": true, 00:13:59.673 "data_offset": 2048, 00:13:59.673 "data_size": 63488 00:13:59.673 } 00:13:59.673 ] 00:13:59.673 }' 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.673 12:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.241 12:06:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.241 12:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.241 12:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.241 12:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.241 12:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.241 12:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.241 12:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.241 12:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.241 12:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.241 12:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.241 12:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.241 "name": "raid_bdev1", 00:14:00.241 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:00.241 "strip_size_kb": 0, 00:14:00.241 "state": "online", 00:14:00.241 "raid_level": "raid1", 00:14:00.241 "superblock": true, 00:14:00.241 "num_base_bdevs": 4, 00:14:00.241 "num_base_bdevs_discovered": 3, 00:14:00.241 "num_base_bdevs_operational": 3, 00:14:00.241 "base_bdevs_list": [ 00:14:00.241 { 00:14:00.241 "name": null, 00:14:00.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.241 "is_configured": false, 00:14:00.241 "data_offset": 0, 00:14:00.241 "data_size": 63488 00:14:00.241 }, 00:14:00.241 { 00:14:00.241 "name": "BaseBdev2", 00:14:00.241 "uuid": "9a3cca31-c3ef-58e8-bebf-4d34245b2e7b", 00:14:00.241 "is_configured": true, 00:14:00.241 "data_offset": 2048, 00:14:00.241 "data_size": 
63488 00:14:00.241 }, 00:14:00.241 { 00:14:00.241 "name": "BaseBdev3", 00:14:00.241 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:00.241 "is_configured": true, 00:14:00.241 "data_offset": 2048, 00:14:00.241 "data_size": 63488 00:14:00.241 }, 00:14:00.241 { 00:14:00.241 "name": "BaseBdev4", 00:14:00.241 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:00.241 "is_configured": true, 00:14:00.241 "data_offset": 2048, 00:14:00.241 "data_size": 63488 00:14:00.241 } 00:14:00.241 ] 00:14:00.241 }' 00:14:00.241 12:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.242 12:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.242 12:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.242 12:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.242 12:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:00.242 12:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.242 12:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.242 [2024-11-19 12:06:03.480652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.242 [2024-11-19 12:06:03.494799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:00.242 12:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.242 12:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:00.242 [2024-11-19 12:06:03.496663] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:01.253 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:14:01.253 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.253 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.253 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.253 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.253 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.253 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.253 12:06:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.253 12:06:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.253 12:06:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.253 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.253 "name": "raid_bdev1", 00:14:01.253 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:01.253 "strip_size_kb": 0, 00:14:01.253 "state": "online", 00:14:01.253 "raid_level": "raid1", 00:14:01.253 "superblock": true, 00:14:01.253 "num_base_bdevs": 4, 00:14:01.253 "num_base_bdevs_discovered": 4, 00:14:01.253 "num_base_bdevs_operational": 4, 00:14:01.253 "process": { 00:14:01.253 "type": "rebuild", 00:14:01.253 "target": "spare", 00:14:01.253 "progress": { 00:14:01.254 "blocks": 20480, 00:14:01.254 "percent": 32 00:14:01.254 } 00:14:01.254 }, 00:14:01.254 "base_bdevs_list": [ 00:14:01.254 { 00:14:01.254 "name": "spare", 00:14:01.254 "uuid": "4e2a95e4-a2e9-567d-bd8f-8f951f4aa60e", 00:14:01.254 "is_configured": true, 00:14:01.254 "data_offset": 2048, 00:14:01.254 "data_size": 63488 00:14:01.254 }, 00:14:01.254 { 00:14:01.254 "name": "BaseBdev2", 00:14:01.254 "uuid": 
"9a3cca31-c3ef-58e8-bebf-4d34245b2e7b", 00:14:01.254 "is_configured": true, 00:14:01.254 "data_offset": 2048, 00:14:01.254 "data_size": 63488 00:14:01.254 }, 00:14:01.254 { 00:14:01.254 "name": "BaseBdev3", 00:14:01.254 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:01.254 "is_configured": true, 00:14:01.254 "data_offset": 2048, 00:14:01.254 "data_size": 63488 00:14:01.254 }, 00:14:01.254 { 00:14:01.254 "name": "BaseBdev4", 00:14:01.254 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:01.254 "is_configured": true, 00:14:01.254 "data_offset": 2048, 00:14:01.254 "data_size": 63488 00:14:01.254 } 00:14:01.254 ] 00:14:01.254 }' 00:14:01.254 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.254 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.254 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.526 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.526 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:01.526 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:01.526 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:01.526 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:01.526 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:01.526 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:01.526 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:01.526 12:06:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.526 12:06:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.526 [2024-11-19 12:06:04.664442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:01.526 [2024-11-19 12:06:04.801285] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:01.526 12:06:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.527 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:01.527 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:01.527 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.527 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.527 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.527 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.527 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.527 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.527 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.527 12:06:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.527 12:06:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.527 12:06:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.527 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.527 "name": "raid_bdev1", 00:14:01.527 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:01.527 "strip_size_kb": 0, 00:14:01.527 
"state": "online", 00:14:01.527 "raid_level": "raid1", 00:14:01.527 "superblock": true, 00:14:01.527 "num_base_bdevs": 4, 00:14:01.527 "num_base_bdevs_discovered": 3, 00:14:01.527 "num_base_bdevs_operational": 3, 00:14:01.527 "process": { 00:14:01.527 "type": "rebuild", 00:14:01.527 "target": "spare", 00:14:01.527 "progress": { 00:14:01.527 "blocks": 24576, 00:14:01.527 "percent": 38 00:14:01.527 } 00:14:01.527 }, 00:14:01.527 "base_bdevs_list": [ 00:14:01.527 { 00:14:01.527 "name": "spare", 00:14:01.527 "uuid": "4e2a95e4-a2e9-567d-bd8f-8f951f4aa60e", 00:14:01.527 "is_configured": true, 00:14:01.527 "data_offset": 2048, 00:14:01.527 "data_size": 63488 00:14:01.527 }, 00:14:01.527 { 00:14:01.527 "name": null, 00:14:01.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.527 "is_configured": false, 00:14:01.527 "data_offset": 0, 00:14:01.527 "data_size": 63488 00:14:01.527 }, 00:14:01.527 { 00:14:01.527 "name": "BaseBdev3", 00:14:01.527 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:01.527 "is_configured": true, 00:14:01.527 "data_offset": 2048, 00:14:01.527 "data_size": 63488 00:14:01.527 }, 00:14:01.527 { 00:14:01.527 "name": "BaseBdev4", 00:14:01.527 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:01.527 "is_configured": true, 00:14:01.527 "data_offset": 2048, 00:14:01.527 "data_size": 63488 00:14:01.527 } 00:14:01.527 ] 00:14:01.527 }' 00:14:01.527 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=457 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.786 12:06:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.786 12:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.786 "name": "raid_bdev1", 00:14:01.786 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:01.786 "strip_size_kb": 0, 00:14:01.786 "state": "online", 00:14:01.786 "raid_level": "raid1", 00:14:01.786 "superblock": true, 00:14:01.786 "num_base_bdevs": 4, 00:14:01.786 "num_base_bdevs_discovered": 3, 00:14:01.786 "num_base_bdevs_operational": 3, 00:14:01.786 "process": { 00:14:01.786 "type": "rebuild", 00:14:01.786 "target": "spare", 00:14:01.786 "progress": { 00:14:01.786 "blocks": 26624, 00:14:01.786 "percent": 41 00:14:01.786 } 00:14:01.786 }, 00:14:01.786 "base_bdevs_list": [ 00:14:01.786 { 00:14:01.786 "name": "spare", 00:14:01.786 "uuid": "4e2a95e4-a2e9-567d-bd8f-8f951f4aa60e", 00:14:01.786 "is_configured": 
true, 00:14:01.786 "data_offset": 2048, 00:14:01.786 "data_size": 63488 00:14:01.786 }, 00:14:01.786 { 00:14:01.786 "name": null, 00:14:01.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.786 "is_configured": false, 00:14:01.786 "data_offset": 0, 00:14:01.786 "data_size": 63488 00:14:01.786 }, 00:14:01.786 { 00:14:01.786 "name": "BaseBdev3", 00:14:01.786 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:01.786 "is_configured": true, 00:14:01.786 "data_offset": 2048, 00:14:01.786 "data_size": 63488 00:14:01.786 }, 00:14:01.786 { 00:14:01.786 "name": "BaseBdev4", 00:14:01.786 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:01.786 "is_configured": true, 00:14:01.786 "data_offset": 2048, 00:14:01.786 "data_size": 63488 00:14:01.786 } 00:14:01.786 ] 00:14:01.786 }' 00:14:01.786 12:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.786 12:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.786 12:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.786 12:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.786 12:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:02.723 12:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.723 12:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.723 12:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.723 12:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.723 12:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.723 12:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:14:02.983 12:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.983 12:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.983 12:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.983 12:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.983 12:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.983 12:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.983 "name": "raid_bdev1", 00:14:02.983 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:02.983 "strip_size_kb": 0, 00:14:02.983 "state": "online", 00:14:02.983 "raid_level": "raid1", 00:14:02.983 "superblock": true, 00:14:02.983 "num_base_bdevs": 4, 00:14:02.983 "num_base_bdevs_discovered": 3, 00:14:02.983 "num_base_bdevs_operational": 3, 00:14:02.983 "process": { 00:14:02.983 "type": "rebuild", 00:14:02.983 "target": "spare", 00:14:02.983 "progress": { 00:14:02.983 "blocks": 51200, 00:14:02.983 "percent": 80 00:14:02.983 } 00:14:02.983 }, 00:14:02.983 "base_bdevs_list": [ 00:14:02.983 { 00:14:02.983 "name": "spare", 00:14:02.983 "uuid": "4e2a95e4-a2e9-567d-bd8f-8f951f4aa60e", 00:14:02.983 "is_configured": true, 00:14:02.983 "data_offset": 2048, 00:14:02.983 "data_size": 63488 00:14:02.983 }, 00:14:02.983 { 00:14:02.983 "name": null, 00:14:02.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.983 "is_configured": false, 00:14:02.983 "data_offset": 0, 00:14:02.983 "data_size": 63488 00:14:02.983 }, 00:14:02.983 { 00:14:02.983 "name": "BaseBdev3", 00:14:02.983 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:02.983 "is_configured": true, 00:14:02.983 "data_offset": 2048, 00:14:02.983 "data_size": 63488 00:14:02.983 }, 00:14:02.983 { 00:14:02.983 "name": "BaseBdev4", 00:14:02.983 "uuid": 
"f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:02.983 "is_configured": true, 00:14:02.983 "data_offset": 2048, 00:14:02.983 "data_size": 63488 00:14:02.983 } 00:14:02.983 ] 00:14:02.983 }' 00:14:02.983 12:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.983 12:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.983 12:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.983 12:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.983 12:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:03.551 [2024-11-19 12:06:06.709035] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:03.551 [2024-11-19 12:06:06.709168] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:03.551 [2024-11-19 12:06:06.709315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.119 "name": "raid_bdev1", 00:14:04.119 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:04.119 "strip_size_kb": 0, 00:14:04.119 "state": "online", 00:14:04.119 "raid_level": "raid1", 00:14:04.119 "superblock": true, 00:14:04.119 "num_base_bdevs": 4, 00:14:04.119 "num_base_bdevs_discovered": 3, 00:14:04.119 "num_base_bdevs_operational": 3, 00:14:04.119 "base_bdevs_list": [ 00:14:04.119 { 00:14:04.119 "name": "spare", 00:14:04.119 "uuid": "4e2a95e4-a2e9-567d-bd8f-8f951f4aa60e", 00:14:04.119 "is_configured": true, 00:14:04.119 "data_offset": 2048, 00:14:04.119 "data_size": 63488 00:14:04.119 }, 00:14:04.119 { 00:14:04.119 "name": null, 00:14:04.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.119 "is_configured": false, 00:14:04.119 "data_offset": 0, 00:14:04.119 "data_size": 63488 00:14:04.119 }, 00:14:04.119 { 00:14:04.119 "name": "BaseBdev3", 00:14:04.119 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:04.119 "is_configured": true, 00:14:04.119 "data_offset": 2048, 00:14:04.119 "data_size": 63488 00:14:04.119 }, 00:14:04.119 { 00:14:04.119 "name": "BaseBdev4", 00:14:04.119 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:04.119 "is_configured": true, 00:14:04.119 "data_offset": 2048, 00:14:04.119 "data_size": 63488 00:14:04.119 } 00:14:04.119 ] 00:14:04.119 }' 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 
00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.119 "name": "raid_bdev1", 00:14:04.119 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:04.119 "strip_size_kb": 0, 00:14:04.119 "state": "online", 00:14:04.119 "raid_level": "raid1", 00:14:04.119 "superblock": true, 00:14:04.119 "num_base_bdevs": 4, 00:14:04.119 "num_base_bdevs_discovered": 3, 00:14:04.119 "num_base_bdevs_operational": 3, 00:14:04.119 "base_bdevs_list": [ 00:14:04.119 { 00:14:04.119 "name": "spare", 00:14:04.119 "uuid": 
"4e2a95e4-a2e9-567d-bd8f-8f951f4aa60e", 00:14:04.119 "is_configured": true, 00:14:04.119 "data_offset": 2048, 00:14:04.119 "data_size": 63488 00:14:04.119 }, 00:14:04.119 { 00:14:04.119 "name": null, 00:14:04.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.119 "is_configured": false, 00:14:04.119 "data_offset": 0, 00:14:04.119 "data_size": 63488 00:14:04.119 }, 00:14:04.119 { 00:14:04.119 "name": "BaseBdev3", 00:14:04.119 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:04.119 "is_configured": true, 00:14:04.119 "data_offset": 2048, 00:14:04.119 "data_size": 63488 00:14:04.119 }, 00:14:04.119 { 00:14:04.119 "name": "BaseBdev4", 00:14:04.119 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:04.119 "is_configured": true, 00:14:04.119 "data_offset": 2048, 00:14:04.119 "data_size": 63488 00:14:04.119 } 00:14:04.119 ] 00:14:04.119 }' 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:04.119 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.377 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:04.377 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:04.377 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.377 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.378 "name": "raid_bdev1", 00:14:04.378 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:04.378 "strip_size_kb": 0, 00:14:04.378 "state": "online", 00:14:04.378 "raid_level": "raid1", 00:14:04.378 "superblock": true, 00:14:04.378 "num_base_bdevs": 4, 00:14:04.378 "num_base_bdevs_discovered": 3, 00:14:04.378 "num_base_bdevs_operational": 3, 00:14:04.378 "base_bdevs_list": [ 00:14:04.378 { 00:14:04.378 "name": "spare", 00:14:04.378 "uuid": "4e2a95e4-a2e9-567d-bd8f-8f951f4aa60e", 00:14:04.378 "is_configured": true, 00:14:04.378 "data_offset": 2048, 00:14:04.378 "data_size": 63488 00:14:04.378 }, 00:14:04.378 { 00:14:04.378 "name": null, 00:14:04.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.378 "is_configured": false, 00:14:04.378 "data_offset": 0, 00:14:04.378 "data_size": 63488 00:14:04.378 }, 00:14:04.378 { 00:14:04.378 "name": "BaseBdev3", 00:14:04.378 "uuid": 
"412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:04.378 "is_configured": true, 00:14:04.378 "data_offset": 2048, 00:14:04.378 "data_size": 63488 00:14:04.378 }, 00:14:04.378 { 00:14:04.378 "name": "BaseBdev4", 00:14:04.378 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:04.378 "is_configured": true, 00:14:04.378 "data_offset": 2048, 00:14:04.378 "data_size": 63488 00:14:04.378 } 00:14:04.378 ] 00:14:04.378 }' 00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.378 12:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.637 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:04.637 12:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.637 12:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.637 [2024-11-19 12:06:07.987539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.637 [2024-11-19 12:06:07.987613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.637 [2024-11-19 12:06:07.987718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.637 [2024-11-19 12:06:07.987812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.637 [2024-11-19 12:06:07.987855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:04.637 12:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.637 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:04.637 12:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.637 12:06:07 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.637 12:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.637 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:04.896 /dev/nbd0 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:04.896 12:06:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:04.896 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.156 1+0 records in 00:14:05.156 1+0 records out 00:14:05.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461228 s, 8.9 MB/s 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:05.156 /dev/nbd1 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.156 1+0 records in 00:14:05.156 1+0 records out 00:14:05.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269052 s, 15.2 MB/s 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:05.156 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:05.416 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:05.416 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:05.416 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:05.416 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:05.416 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:05.416 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.416 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:05.676 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:05.676 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:05.676 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:05.676 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.676 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.676 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:05.676 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:05.676 
12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.677 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.677 12:06:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.948 [2024-11-19 12:06:09.098173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:05.948 [2024-11-19 12:06:09.098229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.948 [2024-11-19 12:06:09.098252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:05.948 [2024-11-19 12:06:09.098262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.948 [2024-11-19 12:06:09.100446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.948 [2024-11-19 12:06:09.100486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:05.948 [2024-11-19 12:06:09.100581] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:05.948 [2024-11-19 12:06:09.100632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.948 [2024-11-19 12:06:09.100790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.948 [2024-11-19 12:06:09.100874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:05.948 spare 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.948 [2024-11-19 12:06:09.200776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:05.948 [2024-11-19 12:06:09.200839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:05.948 [2024-11-19 
12:06:09.201167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:05.948 [2024-11-19 12:06:09.201344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:05.948 [2024-11-19 12:06:09.201357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:05.948 [2024-11-19 12:06:09.201506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.948 12:06:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.948 "name": "raid_bdev1", 00:14:05.948 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:05.948 "strip_size_kb": 0, 00:14:05.948 "state": "online", 00:14:05.948 "raid_level": "raid1", 00:14:05.948 "superblock": true, 00:14:05.948 "num_base_bdevs": 4, 00:14:05.948 "num_base_bdevs_discovered": 3, 00:14:05.948 "num_base_bdevs_operational": 3, 00:14:05.948 "base_bdevs_list": [ 00:14:05.948 { 00:14:05.948 "name": "spare", 00:14:05.948 "uuid": "4e2a95e4-a2e9-567d-bd8f-8f951f4aa60e", 00:14:05.948 "is_configured": true, 00:14:05.948 "data_offset": 2048, 00:14:05.948 "data_size": 63488 00:14:05.948 }, 00:14:05.948 { 00:14:05.948 "name": null, 00:14:05.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.948 "is_configured": false, 00:14:05.948 "data_offset": 2048, 00:14:05.948 "data_size": 63488 00:14:05.948 }, 00:14:05.948 { 00:14:05.948 "name": "BaseBdev3", 00:14:05.948 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:05.948 "is_configured": true, 00:14:05.948 "data_offset": 2048, 00:14:05.948 "data_size": 63488 00:14:05.948 }, 00:14:05.948 { 00:14:05.948 "name": "BaseBdev4", 00:14:05.948 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:05.948 "is_configured": true, 00:14:05.948 "data_offset": 2048, 00:14:05.948 "data_size": 63488 00:14:05.948 } 00:14:05.948 ] 00:14:05.948 }' 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.948 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.523 "name": "raid_bdev1", 00:14:06.523 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:06.523 "strip_size_kb": 0, 00:14:06.523 "state": "online", 00:14:06.523 "raid_level": "raid1", 00:14:06.523 "superblock": true, 00:14:06.523 "num_base_bdevs": 4, 00:14:06.523 "num_base_bdevs_discovered": 3, 00:14:06.523 "num_base_bdevs_operational": 3, 00:14:06.523 "base_bdevs_list": [ 00:14:06.523 { 00:14:06.523 "name": "spare", 00:14:06.523 "uuid": "4e2a95e4-a2e9-567d-bd8f-8f951f4aa60e", 00:14:06.523 "is_configured": true, 00:14:06.523 "data_offset": 2048, 00:14:06.523 "data_size": 63488 00:14:06.523 }, 00:14:06.523 { 00:14:06.523 "name": null, 00:14:06.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.523 "is_configured": false, 00:14:06.523 "data_offset": 2048, 00:14:06.523 "data_size": 63488 00:14:06.523 }, 00:14:06.523 { 00:14:06.523 "name": 
"BaseBdev3", 00:14:06.523 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:06.523 "is_configured": true, 00:14:06.523 "data_offset": 2048, 00:14:06.523 "data_size": 63488 00:14:06.523 }, 00:14:06.523 { 00:14:06.523 "name": "BaseBdev4", 00:14:06.523 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:06.523 "is_configured": true, 00:14:06.523 "data_offset": 2048, 00:14:06.523 "data_size": 63488 00:14:06.523 } 00:14:06.523 ] 00:14:06.523 }' 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.523 [2024-11-19 12:06:09.805018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.523 12:06:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.523 "name": "raid_bdev1", 00:14:06.523 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:06.523 "strip_size_kb": 0, 00:14:06.523 "state": "online", 
00:14:06.523 "raid_level": "raid1", 00:14:06.523 "superblock": true, 00:14:06.523 "num_base_bdevs": 4, 00:14:06.523 "num_base_bdevs_discovered": 2, 00:14:06.523 "num_base_bdevs_operational": 2, 00:14:06.523 "base_bdevs_list": [ 00:14:06.523 { 00:14:06.523 "name": null, 00:14:06.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.523 "is_configured": false, 00:14:06.523 "data_offset": 0, 00:14:06.523 "data_size": 63488 00:14:06.523 }, 00:14:06.523 { 00:14:06.523 "name": null, 00:14:06.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.523 "is_configured": false, 00:14:06.523 "data_offset": 2048, 00:14:06.523 "data_size": 63488 00:14:06.523 }, 00:14:06.523 { 00:14:06.523 "name": "BaseBdev3", 00:14:06.523 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:06.523 "is_configured": true, 00:14:06.523 "data_offset": 2048, 00:14:06.523 "data_size": 63488 00:14:06.523 }, 00:14:06.523 { 00:14:06.523 "name": "BaseBdev4", 00:14:06.523 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:06.523 "is_configured": true, 00:14:06.523 "data_offset": 2048, 00:14:06.523 "data_size": 63488 00:14:06.523 } 00:14:06.523 ] 00:14:06.523 }' 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.523 12:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.092 12:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:07.092 12:06:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.092 12:06:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.093 [2024-11-19 12:06:10.288174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.093 [2024-11-19 12:06:10.288412] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:14:07.093 [2024-11-19 12:06:10.288475] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:07.093 [2024-11-19 12:06:10.288543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.093 [2024-11-19 12:06:10.302862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:07.093 12:06:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.093 12:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:07.093 [2024-11-19 12:06:10.304781] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:08.031 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.031 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.031 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.031 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.031 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.031 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.031 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.031 12:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.031 12:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.031 12:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.031 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.031 "name": "raid_bdev1", 00:14:08.031 "uuid": 
"aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:08.031 "strip_size_kb": 0, 00:14:08.031 "state": "online", 00:14:08.031 "raid_level": "raid1", 00:14:08.031 "superblock": true, 00:14:08.031 "num_base_bdevs": 4, 00:14:08.031 "num_base_bdevs_discovered": 3, 00:14:08.031 "num_base_bdevs_operational": 3, 00:14:08.031 "process": { 00:14:08.031 "type": "rebuild", 00:14:08.031 "target": "spare", 00:14:08.031 "progress": { 00:14:08.031 "blocks": 20480, 00:14:08.031 "percent": 32 00:14:08.031 } 00:14:08.031 }, 00:14:08.031 "base_bdevs_list": [ 00:14:08.031 { 00:14:08.031 "name": "spare", 00:14:08.031 "uuid": "4e2a95e4-a2e9-567d-bd8f-8f951f4aa60e", 00:14:08.031 "is_configured": true, 00:14:08.031 "data_offset": 2048, 00:14:08.031 "data_size": 63488 00:14:08.031 }, 00:14:08.031 { 00:14:08.031 "name": null, 00:14:08.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.032 "is_configured": false, 00:14:08.032 "data_offset": 2048, 00:14:08.032 "data_size": 63488 00:14:08.032 }, 00:14:08.032 { 00:14:08.032 "name": "BaseBdev3", 00:14:08.032 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:08.032 "is_configured": true, 00:14:08.032 "data_offset": 2048, 00:14:08.032 "data_size": 63488 00:14:08.032 }, 00:14:08.032 { 00:14:08.032 "name": "BaseBdev4", 00:14:08.032 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:08.032 "is_configured": true, 00:14:08.032 "data_offset": 2048, 00:14:08.032 "data_size": 63488 00:14:08.032 } 00:14:08.032 ] 00:14:08.032 }' 00:14:08.032 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.291 [2024-11-19 12:06:11.456269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.291 [2024-11-19 12:06:11.509641] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:08.291 [2024-11-19 12:06:11.509699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.291 [2024-11-19 12:06:11.509719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.291 [2024-11-19 12:06:11.509726] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.291 "name": "raid_bdev1", 00:14:08.291 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:08.291 "strip_size_kb": 0, 00:14:08.291 "state": "online", 00:14:08.291 "raid_level": "raid1", 00:14:08.291 "superblock": true, 00:14:08.291 "num_base_bdevs": 4, 00:14:08.291 "num_base_bdevs_discovered": 2, 00:14:08.291 "num_base_bdevs_operational": 2, 00:14:08.291 "base_bdevs_list": [ 00:14:08.291 { 00:14:08.291 "name": null, 00:14:08.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.291 "is_configured": false, 00:14:08.291 "data_offset": 0, 00:14:08.291 "data_size": 63488 00:14:08.291 }, 00:14:08.291 { 00:14:08.291 "name": null, 00:14:08.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.291 "is_configured": false, 00:14:08.291 "data_offset": 2048, 00:14:08.291 "data_size": 63488 00:14:08.291 }, 00:14:08.291 { 00:14:08.291 "name": "BaseBdev3", 00:14:08.291 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:08.291 "is_configured": true, 00:14:08.291 "data_offset": 2048, 00:14:08.291 "data_size": 63488 00:14:08.291 }, 00:14:08.291 { 00:14:08.291 "name": "BaseBdev4", 00:14:08.291 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:08.291 "is_configured": true, 00:14:08.291 
"data_offset": 2048, 00:14:08.291 "data_size": 63488 00:14:08.291 } 00:14:08.291 ] 00:14:08.291 }' 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.291 12:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.860 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:08.860 12:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.860 12:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.860 [2024-11-19 12:06:11.938651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:08.860 [2024-11-19 12:06:11.938752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.860 [2024-11-19 12:06:11.938791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:08.860 [2024-11-19 12:06:11.938818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.860 [2024-11-19 12:06:11.939359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.860 [2024-11-19 12:06:11.939421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:08.860 [2024-11-19 12:06:11.939543] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:08.860 [2024-11-19 12:06:11.939583] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:08.860 [2024-11-19 12:06:11.939627] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:08.860 [2024-11-19 12:06:11.939700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.860 [2024-11-19 12:06:11.953688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:08.860 spare 00:14:08.860 12:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.860 12:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:08.860 [2024-11-19 12:06:11.955544] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:09.799 12:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.799 12:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.799 12:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.799 12:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.799 12:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.799 12:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.799 12:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.799 12:06:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.799 12:06:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.799 12:06:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.799 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.799 "name": "raid_bdev1", 00:14:09.800 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:09.800 "strip_size_kb": 0, 00:14:09.800 "state": "online", 00:14:09.800 
"raid_level": "raid1", 00:14:09.800 "superblock": true, 00:14:09.800 "num_base_bdevs": 4, 00:14:09.800 "num_base_bdevs_discovered": 3, 00:14:09.800 "num_base_bdevs_operational": 3, 00:14:09.800 "process": { 00:14:09.800 "type": "rebuild", 00:14:09.800 "target": "spare", 00:14:09.800 "progress": { 00:14:09.800 "blocks": 20480, 00:14:09.800 "percent": 32 00:14:09.800 } 00:14:09.800 }, 00:14:09.800 "base_bdevs_list": [ 00:14:09.800 { 00:14:09.800 "name": "spare", 00:14:09.800 "uuid": "4e2a95e4-a2e9-567d-bd8f-8f951f4aa60e", 00:14:09.800 "is_configured": true, 00:14:09.800 "data_offset": 2048, 00:14:09.800 "data_size": 63488 00:14:09.800 }, 00:14:09.800 { 00:14:09.800 "name": null, 00:14:09.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.800 "is_configured": false, 00:14:09.800 "data_offset": 2048, 00:14:09.800 "data_size": 63488 00:14:09.800 }, 00:14:09.800 { 00:14:09.800 "name": "BaseBdev3", 00:14:09.800 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:09.800 "is_configured": true, 00:14:09.800 "data_offset": 2048, 00:14:09.800 "data_size": 63488 00:14:09.800 }, 00:14:09.800 { 00:14:09.800 "name": "BaseBdev4", 00:14:09.800 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:09.800 "is_configured": true, 00:14:09.800 "data_offset": 2048, 00:14:09.800 "data_size": 63488 00:14:09.800 } 00:14:09.800 ] 00:14:09.800 }' 00:14:09.800 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.800 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.800 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.800 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.800 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:09.800 12:06:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.800 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.800 [2024-11-19 12:06:13.123423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.800 [2024-11-19 12:06:13.160369] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:09.800 [2024-11-19 12:06:13.160421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.800 [2024-11-19 12:06:13.160462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.800 [2024-11-19 12:06:13.160470] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.059 
12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.059 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.059 "name": "raid_bdev1", 00:14:10.060 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:10.060 "strip_size_kb": 0, 00:14:10.060 "state": "online", 00:14:10.060 "raid_level": "raid1", 00:14:10.060 "superblock": true, 00:14:10.060 "num_base_bdevs": 4, 00:14:10.060 "num_base_bdevs_discovered": 2, 00:14:10.060 "num_base_bdevs_operational": 2, 00:14:10.060 "base_bdevs_list": [ 00:14:10.060 { 00:14:10.060 "name": null, 00:14:10.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.060 "is_configured": false, 00:14:10.060 "data_offset": 0, 00:14:10.060 "data_size": 63488 00:14:10.060 }, 00:14:10.060 { 00:14:10.060 "name": null, 00:14:10.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.060 "is_configured": false, 00:14:10.060 "data_offset": 2048, 00:14:10.060 "data_size": 63488 00:14:10.060 }, 00:14:10.060 { 00:14:10.060 "name": "BaseBdev3", 00:14:10.060 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:10.060 "is_configured": true, 00:14:10.060 "data_offset": 2048, 00:14:10.060 "data_size": 63488 00:14:10.060 }, 00:14:10.060 { 00:14:10.060 "name": "BaseBdev4", 00:14:10.060 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:10.060 "is_configured": true, 00:14:10.060 "data_offset": 2048, 00:14:10.060 "data_size": 63488 00:14:10.060 } 00:14:10.060 ] 00:14:10.060 }' 00:14:10.060 12:06:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.060 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.320 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.320 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.320 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.320 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.320 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.320 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.320 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.320 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.320 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.320 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.320 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.320 "name": "raid_bdev1", 00:14:10.320 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:10.320 "strip_size_kb": 0, 00:14:10.320 "state": "online", 00:14:10.320 "raid_level": "raid1", 00:14:10.320 "superblock": true, 00:14:10.320 "num_base_bdevs": 4, 00:14:10.320 "num_base_bdevs_discovered": 2, 00:14:10.320 "num_base_bdevs_operational": 2, 00:14:10.320 "base_bdevs_list": [ 00:14:10.320 { 00:14:10.320 "name": null, 00:14:10.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.320 "is_configured": false, 00:14:10.320 "data_offset": 0, 00:14:10.320 "data_size": 63488 00:14:10.320 }, 00:14:10.320 
{ 00:14:10.320 "name": null, 00:14:10.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.320 "is_configured": false, 00:14:10.320 "data_offset": 2048, 00:14:10.320 "data_size": 63488 00:14:10.320 }, 00:14:10.320 { 00:14:10.320 "name": "BaseBdev3", 00:14:10.320 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:10.320 "is_configured": true, 00:14:10.320 "data_offset": 2048, 00:14:10.320 "data_size": 63488 00:14:10.320 }, 00:14:10.320 { 00:14:10.320 "name": "BaseBdev4", 00:14:10.320 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:10.320 "is_configured": true, 00:14:10.320 "data_offset": 2048, 00:14:10.320 "data_size": 63488 00:14:10.320 } 00:14:10.320 ] 00:14:10.320 }' 00:14:10.320 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.320 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.320 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.580 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.580 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:10.580 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.580 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.580 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.580 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:10.580 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.580 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.580 [2024-11-19 12:06:13.716189] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:10.580 [2024-11-19 12:06:13.716264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.580 [2024-11-19 12:06:13.716284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:10.580 [2024-11-19 12:06:13.716294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.580 [2024-11-19 12:06:13.716737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.580 [2024-11-19 12:06:13.716757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:10.580 [2024-11-19 12:06:13.716831] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:10.580 [2024-11-19 12:06:13.716846] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:10.580 [2024-11-19 12:06:13.716854] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:10.580 [2024-11-19 12:06:13.716878] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:10.580 BaseBdev1 00:14:10.580 12:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.580 12:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.520 12:06:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.520 "name": "raid_bdev1", 00:14:11.520 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:11.520 "strip_size_kb": 0, 00:14:11.520 "state": "online", 00:14:11.520 "raid_level": "raid1", 00:14:11.520 "superblock": true, 00:14:11.520 "num_base_bdevs": 4, 00:14:11.520 "num_base_bdevs_discovered": 2, 00:14:11.520 "num_base_bdevs_operational": 2, 00:14:11.520 "base_bdevs_list": [ 00:14:11.520 { 00:14:11.520 "name": null, 00:14:11.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.520 "is_configured": false, 00:14:11.520 "data_offset": 0, 00:14:11.520 "data_size": 63488 00:14:11.520 }, 00:14:11.520 { 00:14:11.520 "name": null, 00:14:11.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.520 
"is_configured": false, 00:14:11.520 "data_offset": 2048, 00:14:11.520 "data_size": 63488 00:14:11.520 }, 00:14:11.520 { 00:14:11.520 "name": "BaseBdev3", 00:14:11.520 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:11.520 "is_configured": true, 00:14:11.520 "data_offset": 2048, 00:14:11.520 "data_size": 63488 00:14:11.520 }, 00:14:11.520 { 00:14:11.520 "name": "BaseBdev4", 00:14:11.520 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:11.520 "is_configured": true, 00:14:11.520 "data_offset": 2048, 00:14:11.520 "data_size": 63488 00:14:11.520 } 00:14:11.520 ] 00:14:11.520 }' 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.520 12:06:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.780 12:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:11.780 12:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.780 12:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:11.780 12:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:11.780 12:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.780 12:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.780 12:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.780 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.780 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.040 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.040 12:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:12.040 "name": "raid_bdev1", 00:14:12.040 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:12.040 "strip_size_kb": 0, 00:14:12.040 "state": "online", 00:14:12.040 "raid_level": "raid1", 00:14:12.040 "superblock": true, 00:14:12.040 "num_base_bdevs": 4, 00:14:12.040 "num_base_bdevs_discovered": 2, 00:14:12.040 "num_base_bdevs_operational": 2, 00:14:12.040 "base_bdevs_list": [ 00:14:12.040 { 00:14:12.040 "name": null, 00:14:12.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.040 "is_configured": false, 00:14:12.040 "data_offset": 0, 00:14:12.040 "data_size": 63488 00:14:12.040 }, 00:14:12.040 { 00:14:12.040 "name": null, 00:14:12.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.040 "is_configured": false, 00:14:12.040 "data_offset": 2048, 00:14:12.040 "data_size": 63488 00:14:12.040 }, 00:14:12.040 { 00:14:12.040 "name": "BaseBdev3", 00:14:12.040 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:12.040 "is_configured": true, 00:14:12.040 "data_offset": 2048, 00:14:12.040 "data_size": 63488 00:14:12.040 }, 00:14:12.040 { 00:14:12.040 "name": "BaseBdev4", 00:14:12.040 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:12.040 "is_configured": true, 00:14:12.040 "data_offset": 2048, 00:14:12.040 "data_size": 63488 00:14:12.040 } 00:14:12.040 ] 00:14:12.040 }' 00:14:12.040 12:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.040 12:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:12.040 12:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.040 12:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.040 12:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:12.040 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:12.040 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:12.041 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:12.041 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.041 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:12.041 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.041 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:12.041 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.041 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.041 [2024-11-19 12:06:15.269930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.041 [2024-11-19 12:06:15.270137] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:12.041 [2024-11-19 12:06:15.270159] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:12.041 request: 00:14:12.041 { 00:14:12.041 "base_bdev": "BaseBdev1", 00:14:12.041 "raid_bdev": "raid_bdev1", 00:14:12.041 "method": "bdev_raid_add_base_bdev", 00:14:12.041 "req_id": 1 00:14:12.041 } 00:14:12.041 Got JSON-RPC error response 00:14:12.041 response: 00:14:12.041 { 00:14:12.041 "code": -22, 00:14:12.041 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:12.041 } 00:14:12.041 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:12.041 12:06:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:12.041 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:12.041 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:12.041 12:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:12.041 12:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.979 "name": "raid_bdev1", 00:14:12.979 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:12.979 "strip_size_kb": 0, 00:14:12.979 "state": "online", 00:14:12.979 "raid_level": "raid1", 00:14:12.979 "superblock": true, 00:14:12.979 "num_base_bdevs": 4, 00:14:12.979 "num_base_bdevs_discovered": 2, 00:14:12.979 "num_base_bdevs_operational": 2, 00:14:12.979 "base_bdevs_list": [ 00:14:12.979 { 00:14:12.979 "name": null, 00:14:12.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.979 "is_configured": false, 00:14:12.979 "data_offset": 0, 00:14:12.979 "data_size": 63488 00:14:12.979 }, 00:14:12.979 { 00:14:12.979 "name": null, 00:14:12.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.979 "is_configured": false, 00:14:12.979 "data_offset": 2048, 00:14:12.979 "data_size": 63488 00:14:12.979 }, 00:14:12.979 { 00:14:12.979 "name": "BaseBdev3", 00:14:12.979 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:12.979 "is_configured": true, 00:14:12.979 "data_offset": 2048, 00:14:12.979 "data_size": 63488 00:14:12.979 }, 00:14:12.979 { 00:14:12.979 "name": "BaseBdev4", 00:14:12.979 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:12.979 "is_configured": true, 00:14:12.979 "data_offset": 2048, 00:14:12.979 "data_size": 63488 00:14:12.979 } 00:14:12.979 ] 00:14:12.979 }' 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.979 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.549 12:06:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.549 "name": "raid_bdev1", 00:14:13.549 "uuid": "aab8a4c6-3aec-4703-a5cf-f07b8a4f791b", 00:14:13.549 "strip_size_kb": 0, 00:14:13.549 "state": "online", 00:14:13.549 "raid_level": "raid1", 00:14:13.549 "superblock": true, 00:14:13.549 "num_base_bdevs": 4, 00:14:13.549 "num_base_bdevs_discovered": 2, 00:14:13.549 "num_base_bdevs_operational": 2, 00:14:13.549 "base_bdevs_list": [ 00:14:13.549 { 00:14:13.549 "name": null, 00:14:13.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.549 "is_configured": false, 00:14:13.549 "data_offset": 0, 00:14:13.549 "data_size": 63488 00:14:13.549 }, 00:14:13.549 { 00:14:13.549 "name": null, 00:14:13.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.549 "is_configured": false, 00:14:13.549 "data_offset": 2048, 00:14:13.549 "data_size": 63488 00:14:13.549 }, 00:14:13.549 { 00:14:13.549 "name": "BaseBdev3", 00:14:13.549 "uuid": "412dd4be-e10a-569e-b726-c521ca0f7da0", 00:14:13.549 "is_configured": true, 00:14:13.549 "data_offset": 2048, 00:14:13.549 "data_size": 63488 00:14:13.549 }, 
00:14:13.549 { 00:14:13.549 "name": "BaseBdev4", 00:14:13.549 "uuid": "f2f13f59-dd73-5b29-b313-2f94bc75a45e", 00:14:13.549 "is_configured": true, 00:14:13.549 "data_offset": 2048, 00:14:13.549 "data_size": 63488 00:14:13.549 } 00:14:13.549 ] 00:14:13.549 }' 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77970 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77970 ']' 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77970 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77970 00:14:13.549 killing process with pid 77970 00:14:13.549 Received shutdown signal, test time was about 60.000000 seconds 00:14:13.549 00:14:13.549 Latency(us) 00:14:13.549 [2024-11-19T12:06:16.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.549 [2024-11-19T12:06:16.926Z] =================================================================================================================== 00:14:13.549 [2024-11-19T12:06:16.926Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77970' 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77970 00:14:13.549 [2024-11-19 12:06:16.886725] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:13.549 [2024-11-19 12:06:16.886857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.549 12:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77970 00:14:13.549 [2024-11-19 12:06:16.886921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.549 [2024-11-19 12:06:16.886930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:14.119 [2024-11-19 12:06:17.350835] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.059 12:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:15.059 00:14:15.059 real 0m24.228s 00:14:15.059 user 0m29.623s 00:14:15.059 sys 0m3.416s 00:14:15.059 12:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.059 12:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.059 ************************************ 00:14:15.059 END TEST raid_rebuild_test_sb 00:14:15.059 ************************************ 00:14:15.320 12:06:18 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:15.320 12:06:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:15.320 12:06:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.320 12:06:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:14:15.320 ************************************ 00:14:15.320 START TEST raid_rebuild_test_io 00:14:15.320 ************************************ 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.320 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78715 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78715 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78715 ']' 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.321 12:06:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.321 [2024-11-19 12:06:18.554831] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:14:15.321 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:15.321 Zero copy mechanism will not be used. 00:14:15.321 [2024-11-19 12:06:18.555023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78715 ] 00:14:15.581 [2024-11-19 12:06:18.729869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.581 [2024-11-19 12:06:18.840027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.841 [2024-11-19 12:06:19.030796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.841 [2024-11-19 12:06:19.030836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.102 BaseBdev1_malloc 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.102 [2024-11-19 12:06:19.421793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:16.102 [2024-11-19 12:06:19.421913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.102 [2024-11-19 12:06:19.421939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:16.102 [2024-11-19 12:06:19.421951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.102 [2024-11-19 12:06:19.424010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.102 [2024-11-19 12:06:19.424047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:16.102 BaseBdev1 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:16.102 BaseBdev2_malloc 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.102 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.102 [2024-11-19 12:06:19.470341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:16.102 [2024-11-19 12:06:19.470399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.102 [2024-11-19 12:06:19.470416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:16.102 [2024-11-19 12:06:19.470427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.102 [2024-11-19 12:06:19.472473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.102 [2024-11-19 12:06:19.472565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:16.376 BaseBdev2 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.376 BaseBdev3_malloc 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.376 [2024-11-19 12:06:19.538317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:16.376 [2024-11-19 12:06:19.538373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.376 [2024-11-19 12:06:19.538393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:16.376 [2024-11-19 12:06:19.538404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.376 [2024-11-19 12:06:19.540563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.376 [2024-11-19 12:06:19.540605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:16.376 BaseBdev3 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.376 BaseBdev4_malloc 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.376 [2024-11-19 12:06:19.592065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:16.376 [2024-11-19 12:06:19.592183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.376 [2024-11-19 12:06:19.592210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:16.376 [2024-11-19 12:06:19.592223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.376 [2024-11-19 12:06:19.594325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.376 [2024-11-19 12:06:19.594365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:16.376 BaseBdev4 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.376 spare_malloc 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.376 spare_delay 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.376 [2024-11-19 12:06:19.656640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:16.376 [2024-11-19 12:06:19.656746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.376 [2024-11-19 12:06:19.656769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:16.376 [2024-11-19 12:06:19.656780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.376 [2024-11-19 12:06:19.658755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.376 [2024-11-19 12:06:19.658796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:16.376 spare 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.376 [2024-11-19 12:06:19.668665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.376 [2024-11-19 12:06:19.670416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.376 [2024-11-19 12:06:19.670481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:16.376 [2024-11-19 12:06:19.670529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:14:16.376 [2024-11-19 12:06:19.670603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:16.376 [2024-11-19 12:06:19.670615] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:16.376 [2024-11-19 12:06:19.670863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:16.376 [2024-11-19 12:06:19.671036] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:16.376 [2024-11-19 12:06:19.671049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:16.376 [2024-11-19 12:06:19.671212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.376 12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.376 "name": "raid_bdev1", 00:14:16.376 "uuid": "6749ceea-aa78-470f-a9a3-d4bf94cadcbf", 00:14:16.376 "strip_size_kb": 0, 00:14:16.376 "state": "online", 00:14:16.376 "raid_level": "raid1", 00:14:16.376 "superblock": false, 00:14:16.376 "num_base_bdevs": 4, 00:14:16.376 "num_base_bdevs_discovered": 4, 00:14:16.376 "num_base_bdevs_operational": 4, 00:14:16.376 "base_bdevs_list": [ 00:14:16.376 { 00:14:16.376 "name": "BaseBdev1", 00:14:16.376 "uuid": "051155db-bc0d-57dd-b09a-1f44d0c8e2dc", 00:14:16.376 "is_configured": true, 00:14:16.376 "data_offset": 0, 00:14:16.376 "data_size": 65536 00:14:16.376 }, 00:14:16.376 { 00:14:16.376 "name": "BaseBdev2", 00:14:16.376 "uuid": "5880f35b-58ee-5e52-ba5b-0f2b2857b019", 00:14:16.376 "is_configured": true, 00:14:16.376 "data_offset": 0, 00:14:16.376 "data_size": 65536 00:14:16.376 }, 00:14:16.376 { 00:14:16.376 "name": "BaseBdev3", 00:14:16.376 "uuid": "47d51ad3-d7c8-50f6-90b9-23e95db59ee2", 00:14:16.376 "is_configured": true, 00:14:16.376 "data_offset": 0, 00:14:16.376 "data_size": 65536 00:14:16.376 }, 00:14:16.376 { 00:14:16.377 "name": "BaseBdev4", 00:14:16.377 "uuid": "2212c7bd-38e5-51b0-91d1-a4951769d598", 00:14:16.377 "is_configured": true, 00:14:16.377 "data_offset": 0, 00:14:16.377 "data_size": 65536 00:14:16.377 } 00:14:16.377 ] 00:14:16.377 }' 00:14:16.377 
12:06:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.377 12:06:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.961 [2024-11-19 12:06:20.164151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:16.961 12:06:20 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.961 [2024-11-19 12:06:20.239662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.961 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.962 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.962 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.962 12:06:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.962 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.962 "name": "raid_bdev1", 00:14:16.962 "uuid": "6749ceea-aa78-470f-a9a3-d4bf94cadcbf", 00:14:16.962 "strip_size_kb": 0, 00:14:16.962 "state": "online", 00:14:16.962 "raid_level": "raid1", 00:14:16.962 "superblock": false, 00:14:16.962 "num_base_bdevs": 4, 00:14:16.962 "num_base_bdevs_discovered": 3, 00:14:16.962 "num_base_bdevs_operational": 3, 00:14:16.962 "base_bdevs_list": [ 00:14:16.962 { 00:14:16.962 "name": null, 00:14:16.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.962 "is_configured": false, 00:14:16.962 "data_offset": 0, 00:14:16.962 "data_size": 65536 00:14:16.962 }, 00:14:16.962 { 00:14:16.962 "name": "BaseBdev2", 00:14:16.962 "uuid": "5880f35b-58ee-5e52-ba5b-0f2b2857b019", 00:14:16.962 "is_configured": true, 00:14:16.962 "data_offset": 0, 00:14:16.962 "data_size": 65536 00:14:16.962 }, 00:14:16.962 { 00:14:16.962 "name": "BaseBdev3", 00:14:16.962 "uuid": "47d51ad3-d7c8-50f6-90b9-23e95db59ee2", 00:14:16.962 "is_configured": true, 00:14:16.962 "data_offset": 0, 00:14:16.962 "data_size": 65536 00:14:16.962 }, 00:14:16.962 { 00:14:16.962 "name": "BaseBdev4", 00:14:16.962 "uuid": "2212c7bd-38e5-51b0-91d1-a4951769d598", 00:14:16.962 "is_configured": true, 00:14:16.962 "data_offset": 0, 00:14:16.962 "data_size": 65536 00:14:16.962 } 00:14:16.962 ] 00:14:16.962 }' 00:14:16.962 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.962 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.962 [2024-11-19 12:06:20.331324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:16.962 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:16.962 Zero copy mechanism will not be used. 00:14:16.962 Running I/O for 60 seconds... 
00:14:17.531 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:17.531 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.531 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.531 [2024-11-19 12:06:20.653295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:17.531 12:06:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.531 12:06:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:17.531 [2024-11-19 12:06:20.693868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:17.531 [2024-11-19 12:06:20.695841] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:17.531 [2024-11-19 12:06:20.803160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:17.531 [2024-11-19 12:06:20.804733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:17.791 [2024-11-19 12:06:21.015297] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:17.791 [2024-11-19 12:06:21.015725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:18.051 [2024-11-19 12:06:21.271778] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:18.311 161.00 IOPS, 483.00 MiB/s [2024-11-19T12:06:21.688Z] [2024-11-19 12:06:21.500335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:18.311 [2024-11-19 12:06:21.500717] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:18.570 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.571 "name": "raid_bdev1", 00:14:18.571 "uuid": "6749ceea-aa78-470f-a9a3-d4bf94cadcbf", 00:14:18.571 "strip_size_kb": 0, 00:14:18.571 "state": "online", 00:14:18.571 "raid_level": "raid1", 00:14:18.571 "superblock": false, 00:14:18.571 "num_base_bdevs": 4, 00:14:18.571 "num_base_bdevs_discovered": 4, 00:14:18.571 "num_base_bdevs_operational": 4, 00:14:18.571 "process": { 00:14:18.571 "type": "rebuild", 00:14:18.571 "target": "spare", 00:14:18.571 "progress": { 00:14:18.571 "blocks": 10240, 00:14:18.571 "percent": 15 00:14:18.571 } 00:14:18.571 }, 00:14:18.571 "base_bdevs_list": [ 00:14:18.571 { 00:14:18.571 "name": "spare", 00:14:18.571 "uuid": 
"e6a4079b-f4b1-58e2-a92c-3d772f9005a4", 00:14:18.571 "is_configured": true, 00:14:18.571 "data_offset": 0, 00:14:18.571 "data_size": 65536 00:14:18.571 }, 00:14:18.571 { 00:14:18.571 "name": "BaseBdev2", 00:14:18.571 "uuid": "5880f35b-58ee-5e52-ba5b-0f2b2857b019", 00:14:18.571 "is_configured": true, 00:14:18.571 "data_offset": 0, 00:14:18.571 "data_size": 65536 00:14:18.571 }, 00:14:18.571 { 00:14:18.571 "name": "BaseBdev3", 00:14:18.571 "uuid": "47d51ad3-d7c8-50f6-90b9-23e95db59ee2", 00:14:18.571 "is_configured": true, 00:14:18.571 "data_offset": 0, 00:14:18.571 "data_size": 65536 00:14:18.571 }, 00:14:18.571 { 00:14:18.571 "name": "BaseBdev4", 00:14:18.571 "uuid": "2212c7bd-38e5-51b0-91d1-a4951769d598", 00:14:18.571 "is_configured": true, 00:14:18.571 "data_offset": 0, 00:14:18.571 "data_size": 65536 00:14:18.571 } 00:14:18.571 ] 00:14:18.571 }' 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.571 12:06:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.571 [2024-11-19 12:06:21.824534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.571 [2024-11-19 12:06:21.838850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:18.571 [2024-11-19 12:06:21.941350] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:18.839 [2024-11-19 12:06:21.952187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.839 [2024-11-19 12:06:21.952232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.839 [2024-11-19 12:06:21.952247] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:18.839 [2024-11-19 12:06:21.969467] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.839 12:06:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.839 12:06:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.839 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.839 "name": "raid_bdev1", 00:14:18.839 "uuid": "6749ceea-aa78-470f-a9a3-d4bf94cadcbf", 00:14:18.839 "strip_size_kb": 0, 00:14:18.840 "state": "online", 00:14:18.840 "raid_level": "raid1", 00:14:18.840 "superblock": false, 00:14:18.840 "num_base_bdevs": 4, 00:14:18.840 "num_base_bdevs_discovered": 3, 00:14:18.840 "num_base_bdevs_operational": 3, 00:14:18.840 "base_bdevs_list": [ 00:14:18.840 { 00:14:18.840 "name": null, 00:14:18.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.840 "is_configured": false, 00:14:18.840 "data_offset": 0, 00:14:18.840 "data_size": 65536 00:14:18.840 }, 00:14:18.840 { 00:14:18.840 "name": "BaseBdev2", 00:14:18.840 "uuid": "5880f35b-58ee-5e52-ba5b-0f2b2857b019", 00:14:18.840 "is_configured": true, 00:14:18.840 "data_offset": 0, 00:14:18.840 "data_size": 65536 00:14:18.840 }, 00:14:18.840 { 00:14:18.840 "name": "BaseBdev3", 00:14:18.840 "uuid": "47d51ad3-d7c8-50f6-90b9-23e95db59ee2", 00:14:18.840 "is_configured": true, 00:14:18.840 "data_offset": 0, 00:14:18.840 "data_size": 65536 00:14:18.840 }, 00:14:18.840 { 00:14:18.840 "name": "BaseBdev4", 00:14:18.840 "uuid": "2212c7bd-38e5-51b0-91d1-a4951769d598", 00:14:18.840 "is_configured": true, 00:14:18.840 "data_offset": 0, 00:14:18.840 "data_size": 65536 00:14:18.840 } 00:14:18.840 ] 00:14:18.840 }' 00:14:18.840 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.840 12:06:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.103 143.00 IOPS, 429.00 MiB/s 
[2024-11-19T12:06:22.480Z] 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.103 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.103 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.103 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.103 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.103 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.103 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.103 12:06:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.103 12:06:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.103 12:06:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.103 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.103 "name": "raid_bdev1", 00:14:19.103 "uuid": "6749ceea-aa78-470f-a9a3-d4bf94cadcbf", 00:14:19.103 "strip_size_kb": 0, 00:14:19.103 "state": "online", 00:14:19.103 "raid_level": "raid1", 00:14:19.103 "superblock": false, 00:14:19.103 "num_base_bdevs": 4, 00:14:19.103 "num_base_bdevs_discovered": 3, 00:14:19.103 "num_base_bdevs_operational": 3, 00:14:19.103 "base_bdevs_list": [ 00:14:19.103 { 00:14:19.103 "name": null, 00:14:19.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.103 "is_configured": false, 00:14:19.103 "data_offset": 0, 00:14:19.103 "data_size": 65536 00:14:19.103 }, 00:14:19.103 { 00:14:19.103 "name": "BaseBdev2", 00:14:19.103 "uuid": "5880f35b-58ee-5e52-ba5b-0f2b2857b019", 00:14:19.103 "is_configured": true, 00:14:19.103 
"data_offset": 0, 00:14:19.103 "data_size": 65536 00:14:19.103 }, 00:14:19.103 { 00:14:19.103 "name": "BaseBdev3", 00:14:19.103 "uuid": "47d51ad3-d7c8-50f6-90b9-23e95db59ee2", 00:14:19.103 "is_configured": true, 00:14:19.103 "data_offset": 0, 00:14:19.103 "data_size": 65536 00:14:19.103 }, 00:14:19.103 { 00:14:19.103 "name": "BaseBdev4", 00:14:19.103 "uuid": "2212c7bd-38e5-51b0-91d1-a4951769d598", 00:14:19.103 "is_configured": true, 00:14:19.103 "data_offset": 0, 00:14:19.103 "data_size": 65536 00:14:19.103 } 00:14:19.103 ] 00:14:19.103 }' 00:14:19.103 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.363 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:19.363 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.363 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:19.363 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:19.363 12:06:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.363 12:06:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.363 [2024-11-19 12:06:22.553365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.363 12:06:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.363 12:06:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:19.363 [2024-11-19 12:06:22.604132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:19.363 [2024-11-19 12:06:22.605982] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.363 [2024-11-19 12:06:22.719751] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:19.363 [2024-11-19 12:06:22.721167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:19.623 [2024-11-19 12:06:22.931594] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:19.623 [2024-11-19 12:06:22.932022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:19.883 [2024-11-19 12:06:23.165929] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:20.142 [2024-11-19 12:06:23.302691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:20.402 140.00 IOPS, 420.00 MiB/s [2024-11-19T12:06:23.779Z] 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.402 [2024-11-19 12:06:23.630699] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.402 "name": "raid_bdev1", 00:14:20.402 "uuid": "6749ceea-aa78-470f-a9a3-d4bf94cadcbf", 00:14:20.402 "strip_size_kb": 0, 00:14:20.402 "state": "online", 00:14:20.402 "raid_level": "raid1", 00:14:20.402 "superblock": false, 00:14:20.402 "num_base_bdevs": 4, 00:14:20.402 "num_base_bdevs_discovered": 4, 00:14:20.402 "num_base_bdevs_operational": 4, 00:14:20.402 "process": { 00:14:20.402 "type": "rebuild", 00:14:20.402 "target": "spare", 00:14:20.402 "progress": { 00:14:20.402 "blocks": 12288, 00:14:20.402 "percent": 18 00:14:20.402 } 00:14:20.402 }, 00:14:20.402 "base_bdevs_list": [ 00:14:20.402 { 00:14:20.402 "name": "spare", 00:14:20.402 "uuid": "e6a4079b-f4b1-58e2-a92c-3d772f9005a4", 00:14:20.402 "is_configured": true, 00:14:20.402 "data_offset": 0, 00:14:20.402 "data_size": 65536 00:14:20.402 }, 00:14:20.402 { 00:14:20.402 "name": "BaseBdev2", 00:14:20.402 "uuid": "5880f35b-58ee-5e52-ba5b-0f2b2857b019", 00:14:20.402 "is_configured": true, 00:14:20.402 "data_offset": 0, 00:14:20.402 "data_size": 65536 00:14:20.402 }, 00:14:20.402 { 00:14:20.402 "name": "BaseBdev3", 00:14:20.402 "uuid": "47d51ad3-d7c8-50f6-90b9-23e95db59ee2", 00:14:20.402 "is_configured": true, 00:14:20.402 "data_offset": 0, 00:14:20.402 "data_size": 65536 00:14:20.402 }, 00:14:20.402 { 00:14:20.402 "name": "BaseBdev4", 00:14:20.402 "uuid": "2212c7bd-38e5-51b0-91d1-a4951769d598", 00:14:20.402 "is_configured": true, 00:14:20.402 "data_offset": 0, 00:14:20.402 "data_size": 65536 00:14:20.402 } 00:14:20.402 ] 00:14:20.402 }' 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.402 12:06:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.402 12:06:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.402 [2024-11-19 12:06:23.752219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.662 [2024-11-19 12:06:23.840003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:20.662 [2024-11-19 12:06:23.942613] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:20.662 [2024-11-19 12:06:23.942692] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:20.662 [2024-11-19 12:06:23.951048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:20.662 [2024-11-19 12:06:23.951541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:20.663 12:06:23 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.663 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:20.663 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:20.663 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.663 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.663 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.663 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.663 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.663 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.663 12:06:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.663 12:06:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.663 12:06:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.663 12:06:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.663 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.663 "name": "raid_bdev1", 00:14:20.663 "uuid": "6749ceea-aa78-470f-a9a3-d4bf94cadcbf", 00:14:20.663 "strip_size_kb": 0, 00:14:20.663 "state": "online", 00:14:20.663 "raid_level": "raid1", 00:14:20.663 "superblock": false, 00:14:20.663 "num_base_bdevs": 4, 00:14:20.663 "num_base_bdevs_discovered": 3, 00:14:20.663 "num_base_bdevs_operational": 3, 00:14:20.663 "process": { 00:14:20.663 "type": "rebuild", 00:14:20.663 "target": "spare", 00:14:20.663 "progress": { 00:14:20.663 "blocks": 16384, 00:14:20.663 
"percent": 25 00:14:20.663 } 00:14:20.663 }, 00:14:20.663 "base_bdevs_list": [ 00:14:20.663 { 00:14:20.663 "name": "spare", 00:14:20.663 "uuid": "e6a4079b-f4b1-58e2-a92c-3d772f9005a4", 00:14:20.663 "is_configured": true, 00:14:20.663 "data_offset": 0, 00:14:20.663 "data_size": 65536 00:14:20.663 }, 00:14:20.663 { 00:14:20.663 "name": null, 00:14:20.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.663 "is_configured": false, 00:14:20.663 "data_offset": 0, 00:14:20.663 "data_size": 65536 00:14:20.663 }, 00:14:20.663 { 00:14:20.663 "name": "BaseBdev3", 00:14:20.663 "uuid": "47d51ad3-d7c8-50f6-90b9-23e95db59ee2", 00:14:20.663 "is_configured": true, 00:14:20.663 "data_offset": 0, 00:14:20.663 "data_size": 65536 00:14:20.663 }, 00:14:20.663 { 00:14:20.663 "name": "BaseBdev4", 00:14:20.663 "uuid": "2212c7bd-38e5-51b0-91d1-a4951769d598", 00:14:20.663 "is_configured": true, 00:14:20.663 "data_offset": 0, 00:14:20.663 "data_size": 65536 00:14:20.663 } 00:14:20.663 ] 00:14:20.663 }' 00:14:20.663 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.663 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.663 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=477 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.923 "name": "raid_bdev1", 00:14:20.923 "uuid": "6749ceea-aa78-470f-a9a3-d4bf94cadcbf", 00:14:20.923 "strip_size_kb": 0, 00:14:20.923 "state": "online", 00:14:20.923 "raid_level": "raid1", 00:14:20.923 "superblock": false, 00:14:20.923 "num_base_bdevs": 4, 00:14:20.923 "num_base_bdevs_discovered": 3, 00:14:20.923 "num_base_bdevs_operational": 3, 00:14:20.923 "process": { 00:14:20.923 "type": "rebuild", 00:14:20.923 "target": "spare", 00:14:20.923 "progress": { 00:14:20.923 "blocks": 18432, 00:14:20.923 "percent": 28 00:14:20.923 } 00:14:20.923 }, 00:14:20.923 "base_bdevs_list": [ 00:14:20.923 { 00:14:20.923 "name": "spare", 00:14:20.923 "uuid": "e6a4079b-f4b1-58e2-a92c-3d772f9005a4", 00:14:20.923 "is_configured": true, 00:14:20.923 "data_offset": 0, 00:14:20.923 "data_size": 65536 00:14:20.923 }, 00:14:20.923 { 00:14:20.923 "name": null, 00:14:20.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.923 "is_configured": false, 00:14:20.923 "data_offset": 0, 00:14:20.923 "data_size": 65536 00:14:20.923 }, 00:14:20.923 { 00:14:20.923 "name": "BaseBdev3", 00:14:20.923 "uuid": 
"47d51ad3-d7c8-50f6-90b9-23e95db59ee2", 00:14:20.923 "is_configured": true, 00:14:20.923 "data_offset": 0, 00:14:20.923 "data_size": 65536 00:14:20.923 }, 00:14:20.923 { 00:14:20.923 "name": "BaseBdev4", 00:14:20.923 "uuid": "2212c7bd-38e5-51b0-91d1-a4951769d598", 00:14:20.923 "is_configured": true, 00:14:20.923 "data_offset": 0, 00:14:20.923 "data_size": 65536 00:14:20.923 } 00:14:20.923 ] 00:14:20.923 }' 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.923 [2024-11-19 12:06:24.160031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:20.923 [2024-11-19 12:06:24.160979] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.923 12:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.183 131.50 IOPS, 394.50 MiB/s [2024-11-19T12:06:24.560Z] [2024-11-19 12:06:24.375103] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:21.183 [2024-11-19 12:06:24.375664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:21.443 [2024-11-19 12:06:24.715766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:22.011 12:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.011 12:06:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.011 12:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.011 12:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.011 12:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.011 12:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.011 12:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.011 12:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.011 12:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.011 12:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.011 12:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.011 12:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.011 "name": "raid_bdev1", 00:14:22.011 "uuid": "6749ceea-aa78-470f-a9a3-d4bf94cadcbf", 00:14:22.011 "strip_size_kb": 0, 00:14:22.011 "state": "online", 00:14:22.011 "raid_level": "raid1", 00:14:22.011 "superblock": false, 00:14:22.011 "num_base_bdevs": 4, 00:14:22.011 "num_base_bdevs_discovered": 3, 00:14:22.011 "num_base_bdevs_operational": 3, 00:14:22.011 "process": { 00:14:22.011 "type": "rebuild", 00:14:22.011 "target": "spare", 00:14:22.011 "progress": { 00:14:22.011 "blocks": 30720, 00:14:22.011 "percent": 46 00:14:22.011 } 00:14:22.011 }, 00:14:22.011 "base_bdevs_list": [ 00:14:22.011 { 00:14:22.011 "name": "spare", 00:14:22.011 "uuid": "e6a4079b-f4b1-58e2-a92c-3d772f9005a4", 00:14:22.011 "is_configured": true, 00:14:22.011 "data_offset": 0, 00:14:22.011 "data_size": 65536 00:14:22.011 }, 00:14:22.011 { 
00:14:22.011 "name": null, 00:14:22.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.011 "is_configured": false, 00:14:22.011 "data_offset": 0, 00:14:22.011 "data_size": 65536 00:14:22.011 }, 00:14:22.011 { 00:14:22.011 "name": "BaseBdev3", 00:14:22.011 "uuid": "47d51ad3-d7c8-50f6-90b9-23e95db59ee2", 00:14:22.011 "is_configured": true, 00:14:22.011 "data_offset": 0, 00:14:22.011 "data_size": 65536 00:14:22.011 }, 00:14:22.011 { 00:14:22.011 "name": "BaseBdev4", 00:14:22.011 "uuid": "2212c7bd-38e5-51b0-91d1-a4951769d598", 00:14:22.011 "is_configured": true, 00:14:22.011 "data_offset": 0, 00:14:22.011 "data_size": 65536 00:14:22.011 } 00:14:22.012 ] 00:14:22.012 }' 00:14:22.012 12:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.012 12:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.012 12:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.012 [2024-11-19 12:06:25.322193] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:22.012 112.40 IOPS, 337.20 MiB/s [2024-11-19T12:06:25.389Z] 12:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.012 12:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.271 [2024-11-19 12:06:25.569639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:22.530 [2024-11-19 12:06:25.779412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:22.789 [2024-11-19 12:06:26.028264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:23.049 101.00 IOPS, 303.00 MiB/s 
[2024-11-19T12:06:26.426Z] 12:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.049 12:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.049 12:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.049 12:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.049 12:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.049 12:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.049 12:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.049 12:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.049 12:06:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.049 12:06:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.049 12:06:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.049 12:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.049 "name": "raid_bdev1", 00:14:23.049 "uuid": "6749ceea-aa78-470f-a9a3-d4bf94cadcbf", 00:14:23.049 "strip_size_kb": 0, 00:14:23.049 "state": "online", 00:14:23.049 "raid_level": "raid1", 00:14:23.049 "superblock": false, 00:14:23.049 "num_base_bdevs": 4, 00:14:23.049 "num_base_bdevs_discovered": 3, 00:14:23.049 "num_base_bdevs_operational": 3, 00:14:23.049 "process": { 00:14:23.049 "type": "rebuild", 00:14:23.049 "target": "spare", 00:14:23.049 "progress": { 00:14:23.049 "blocks": 49152, 00:14:23.049 "percent": 75 00:14:23.049 } 00:14:23.049 }, 00:14:23.049 "base_bdevs_list": [ 00:14:23.049 { 00:14:23.049 "name": "spare", 00:14:23.049 "uuid": 
"e6a4079b-f4b1-58e2-a92c-3d772f9005a4", 00:14:23.049 "is_configured": true, 00:14:23.049 "data_offset": 0, 00:14:23.049 "data_size": 65536 00:14:23.049 }, 00:14:23.049 { 00:14:23.049 "name": null, 00:14:23.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.049 "is_configured": false, 00:14:23.049 "data_offset": 0, 00:14:23.049 "data_size": 65536 00:14:23.049 }, 00:14:23.049 { 00:14:23.049 "name": "BaseBdev3", 00:14:23.049 "uuid": "47d51ad3-d7c8-50f6-90b9-23e95db59ee2", 00:14:23.049 "is_configured": true, 00:14:23.049 "data_offset": 0, 00:14:23.049 "data_size": 65536 00:14:23.049 }, 00:14:23.049 { 00:14:23.049 "name": "BaseBdev4", 00:14:23.049 "uuid": "2212c7bd-38e5-51b0-91d1-a4951769d598", 00:14:23.049 "is_configured": true, 00:14:23.049 "data_offset": 0, 00:14:23.049 "data_size": 65536 00:14:23.049 } 00:14:23.049 ] 00:14:23.049 }' 00:14:23.049 12:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.308 12:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.308 12:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.308 12:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.308 12:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.568 [2024-11-19 12:06:26.696935] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:23.827 [2024-11-19 12:06:27.127199] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:24.087 [2024-11-19 12:06:27.232587] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:24.087 [2024-11-19 12:06:27.235507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.346 93.71 IOPS, 281.14 MiB/s 
[2024-11-19T12:06:27.723Z] 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.346 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.346 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.346 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.346 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.346 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.346 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.346 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.346 12:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.346 12:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.346 12:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.346 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.346 "name": "raid_bdev1", 00:14:24.346 "uuid": "6749ceea-aa78-470f-a9a3-d4bf94cadcbf", 00:14:24.346 "strip_size_kb": 0, 00:14:24.346 "state": "online", 00:14:24.346 "raid_level": "raid1", 00:14:24.346 "superblock": false, 00:14:24.346 "num_base_bdevs": 4, 00:14:24.346 "num_base_bdevs_discovered": 3, 00:14:24.346 "num_base_bdevs_operational": 3, 00:14:24.346 "base_bdevs_list": [ 00:14:24.346 { 00:14:24.346 "name": "spare", 00:14:24.346 "uuid": "e6a4079b-f4b1-58e2-a92c-3d772f9005a4", 00:14:24.346 "is_configured": true, 00:14:24.346 "data_offset": 0, 00:14:24.346 "data_size": 65536 00:14:24.346 }, 00:14:24.346 { 00:14:24.346 "name": null, 
00:14:24.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.346 "is_configured": false, 00:14:24.346 "data_offset": 0, 00:14:24.346 "data_size": 65536 00:14:24.346 }, 00:14:24.346 { 00:14:24.346 "name": "BaseBdev3", 00:14:24.346 "uuid": "47d51ad3-d7c8-50f6-90b9-23e95db59ee2", 00:14:24.346 "is_configured": true, 00:14:24.346 "data_offset": 0, 00:14:24.347 "data_size": 65536 00:14:24.347 }, 00:14:24.347 { 00:14:24.347 "name": "BaseBdev4", 00:14:24.347 "uuid": "2212c7bd-38e5-51b0-91d1-a4951769d598", 00:14:24.347 "is_configured": true, 00:14:24.347 "data_offset": 0, 00:14:24.347 "data_size": 65536 00:14:24.347 } 00:14:24.347 ] 00:14:24.347 }' 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.347 12:06:27 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.347 "name": "raid_bdev1", 00:14:24.347 "uuid": "6749ceea-aa78-470f-a9a3-d4bf94cadcbf", 00:14:24.347 "strip_size_kb": 0, 00:14:24.347 "state": "online", 00:14:24.347 "raid_level": "raid1", 00:14:24.347 "superblock": false, 00:14:24.347 "num_base_bdevs": 4, 00:14:24.347 "num_base_bdevs_discovered": 3, 00:14:24.347 "num_base_bdevs_operational": 3, 00:14:24.347 "base_bdevs_list": [ 00:14:24.347 { 00:14:24.347 "name": "spare", 00:14:24.347 "uuid": "e6a4079b-f4b1-58e2-a92c-3d772f9005a4", 00:14:24.347 "is_configured": true, 00:14:24.347 "data_offset": 0, 00:14:24.347 "data_size": 65536 00:14:24.347 }, 00:14:24.347 { 00:14:24.347 "name": null, 00:14:24.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.347 "is_configured": false, 00:14:24.347 "data_offset": 0, 00:14:24.347 "data_size": 65536 00:14:24.347 }, 00:14:24.347 { 00:14:24.347 "name": "BaseBdev3", 00:14:24.347 "uuid": "47d51ad3-d7c8-50f6-90b9-23e95db59ee2", 00:14:24.347 "is_configured": true, 00:14:24.347 "data_offset": 0, 00:14:24.347 "data_size": 65536 00:14:24.347 }, 00:14:24.347 { 00:14:24.347 "name": "BaseBdev4", 00:14:24.347 "uuid": "2212c7bd-38e5-51b0-91d1-a4951769d598", 00:14:24.347 "is_configured": true, 00:14:24.347 "data_offset": 0, 00:14:24.347 "data_size": 65536 00:14:24.347 } 00:14:24.347 ] 00:14:24.347 }' 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.347 12:06:27 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.606 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.606 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:24.606 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.606 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.606 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.606 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.606 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.606 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.607 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.607 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.607 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.607 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.607 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.607 12:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.607 12:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.607 12:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.607 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.607 "name": "raid_bdev1", 00:14:24.607 "uuid": 
"6749ceea-aa78-470f-a9a3-d4bf94cadcbf", 00:14:24.607 "strip_size_kb": 0, 00:14:24.607 "state": "online", 00:14:24.607 "raid_level": "raid1", 00:14:24.607 "superblock": false, 00:14:24.607 "num_base_bdevs": 4, 00:14:24.607 "num_base_bdevs_discovered": 3, 00:14:24.607 "num_base_bdevs_operational": 3, 00:14:24.607 "base_bdevs_list": [ 00:14:24.607 { 00:14:24.607 "name": "spare", 00:14:24.607 "uuid": "e6a4079b-f4b1-58e2-a92c-3d772f9005a4", 00:14:24.607 "is_configured": true, 00:14:24.607 "data_offset": 0, 00:14:24.607 "data_size": 65536 00:14:24.607 }, 00:14:24.607 { 00:14:24.607 "name": null, 00:14:24.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.607 "is_configured": false, 00:14:24.607 "data_offset": 0, 00:14:24.607 "data_size": 65536 00:14:24.607 }, 00:14:24.607 { 00:14:24.607 "name": "BaseBdev3", 00:14:24.607 "uuid": "47d51ad3-d7c8-50f6-90b9-23e95db59ee2", 00:14:24.607 "is_configured": true, 00:14:24.607 "data_offset": 0, 00:14:24.607 "data_size": 65536 00:14:24.607 }, 00:14:24.607 { 00:14:24.607 "name": "BaseBdev4", 00:14:24.607 "uuid": "2212c7bd-38e5-51b0-91d1-a4951769d598", 00:14:24.607 "is_configured": true, 00:14:24.607 "data_offset": 0, 00:14:24.607 "data_size": 65536 00:14:24.607 } 00:14:24.607 ] 00:14:24.607 }' 00:14:24.607 12:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.607 12:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.867 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:24.867 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.867 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.867 [2024-11-19 12:06:28.187743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.867 [2024-11-19 12:06:28.187832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:14:24.867 00:14:24.867 Latency(us) 00:14:24.867 [2024-11-19T12:06:28.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.867 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:24.867 raid_bdev1 : 7.89 87.25 261.75 0.00 0.00 15701.56 295.13 116762.83 00:14:24.867 [2024-11-19T12:06:28.244Z] =================================================================================================================== 00:14:24.867 [2024-11-19T12:06:28.244Z] Total : 87.25 261.75 0.00 0.00 15701.56 295.13 116762.83 00:14:24.867 [2024-11-19 12:06:28.224651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.867 [2024-11-19 12:06:28.224745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.867 [2024-11-19 12:06:28.224857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.867 [2024-11-19 12:06:28.224923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:24.867 { 00:14:24.867 "results": [ 00:14:24.867 { 00:14:24.867 "job": "raid_bdev1", 00:14:24.867 "core_mask": "0x1", 00:14:24.867 "workload": "randrw", 00:14:24.867 "percentage": 50, 00:14:24.867 "status": "finished", 00:14:24.867 "queue_depth": 2, 00:14:24.867 "io_size": 3145728, 00:14:24.867 "runtime": 7.885405, 00:14:24.867 "iops": 87.24979883721889, 00:14:24.867 "mibps": 261.74939651165664, 00:14:24.867 "io_failed": 0, 00:14:24.867 "io_timeout": 0, 00:14:24.867 "avg_latency_us": 15701.559947192038, 00:14:24.867 "min_latency_us": 295.12663755458516, 00:14:24.867 "max_latency_us": 116762.82969432314 00:14:24.867 } 00:14:24.867 ], 00:14:24.867 "core_count": 1 00:14:24.867 } 00:14:24.867 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.867 12:06:28 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.867 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:24.867 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.867 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.124 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.124 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:25.124 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:25.124 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:25.124 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:25.124 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.124 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:25.124 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.124 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:25.124 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.124 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:25.124 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.124 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.124 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:25.125 /dev/nbd0 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.384 1+0 records in 00:14:25.384 1+0 records out 00:14:25.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597369 s, 6.9 MB/s 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.384 12:06:28 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:25.384 /dev/nbd1 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:25.384 12:06:28 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:25.384 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:25.643 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:25.643 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:25.643 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:25.643 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.643 1+0 records in 00:14:25.643 1+0 records out 00:14:25.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433998 s, 9.4 MB/s 00:14:25.643 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.643 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:25.643 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.643 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:25.643 12:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:25.643 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.644 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.644 12:06:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:25.644 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:25.644 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.644 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:25.644 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:25.644 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:25.644 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.644 12:06:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks 
/var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.903 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:26.162 /dev/nbd1 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.162 1+0 records in 00:14:26.162 1+0 records out 00:14:26.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432029 s, 9.5 MB/s 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:26.162 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.162 12:06:29 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.422 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:26.682 12:06:29 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78715 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78715 ']' 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78715 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78715 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.682 killing process with pid 78715 00:14:26.682 Received shutdown signal, test time was about 9.590964 seconds 00:14:26.682 00:14:26.682 Latency(us) 00:14:26.682 [2024-11-19T12:06:30.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.682 [2024-11-19T12:06:30.059Z] =================================================================================================================== 00:14:26.682 
[2024-11-19T12:06:30.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78715' 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78715 00:14:26.682 [2024-11-19 12:06:29.905918] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:26.682 12:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78715 00:14:26.942 [2024-11-19 12:06:30.299080] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:28.327 00:14:28.327 real 0m12.938s 00:14:28.327 user 0m16.235s 00:14:28.327 sys 0m1.773s 00:14:28.327 ************************************ 00:14:28.327 END TEST raid_rebuild_test_io 00:14:28.327 ************************************ 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.327 12:06:31 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:28.327 12:06:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:28.327 12:06:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.327 12:06:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.327 ************************************ 00:14:28.327 START TEST raid_rebuild_test_sb_io 00:14:28.327 ************************************ 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:28.327 12:06:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79119 00:14:28.327 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:28.328 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79119 00:14:28.328 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79119 ']' 00:14:28.328 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.328 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.328 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:28.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.328 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.328 12:06:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.328 [2024-11-19 12:06:31.568883] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:14:28.328 [2024-11-19 12:06:31.569088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79119 ] 00:14:28.328 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:28.328 Zero copy mechanism will not be used. 00:14:28.603 [2024-11-19 12:06:31.741440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.604 [2024-11-19 12:06:31.850261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.862 [2024-11-19 12:06:32.043761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.862 [2024-11-19 12:06:32.043859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.122 
BaseBdev1_malloc 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.122 [2024-11-19 12:06:32.431462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:29.122 [2024-11-19 12:06:32.431546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.122 [2024-11-19 12:06:32.431570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:29.122 [2024-11-19 12:06:32.431581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.122 [2024-11-19 12:06:32.433630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.122 [2024-11-19 12:06:32.433668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.122 BaseBdev1 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.122 BaseBdev2_malloc 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.122 12:06:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.122 [2024-11-19 12:06:32.485583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:29.122 [2024-11-19 12:06:32.485653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.122 [2024-11-19 12:06:32.485670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:29.122 [2024-11-19 12:06:32.485682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.122 [2024-11-19 12:06:32.487723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.122 [2024-11-19 12:06:32.487760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:29.122 BaseBdev2 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.122 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.381 BaseBdev3_malloc 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:29.382 12:06:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.382 [2024-11-19 12:06:32.573406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:29.382 [2024-11-19 12:06:32.573460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.382 [2024-11-19 12:06:32.573483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:29.382 [2024-11-19 12:06:32.573497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.382 [2024-11-19 12:06:32.575711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.382 [2024-11-19 12:06:32.575746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:29.382 BaseBdev3 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.382 BaseBdev4_malloc 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:29.382 [2024-11-19 12:06:32.626560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:29.382 [2024-11-19 12:06:32.626618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.382 [2024-11-19 12:06:32.626635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:29.382 [2024-11-19 12:06:32.626645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.382 [2024-11-19 12:06:32.628604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.382 [2024-11-19 12:06:32.628640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:29.382 BaseBdev4 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.382 spare_malloc 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.382 spare_delay 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.382 [2024-11-19 12:06:32.692807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:29.382 [2024-11-19 12:06:32.692854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.382 [2024-11-19 12:06:32.692872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:29.382 [2024-11-19 12:06:32.692882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.382 [2024-11-19 12:06:32.694862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.382 [2024-11-19 12:06:32.694898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:29.382 spare 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.382 [2024-11-19 12:06:32.704828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:29.382 [2024-11-19 12:06:32.706564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.382 [2024-11-19 12:06:32.706632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.382 [2024-11-19 12:06:32.706681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 
is claimed 00:14:29.382 [2024-11-19 12:06:32.706867] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:29.382 [2024-11-19 12:06:32.706896] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:29.382 [2024-11-19 12:06:32.707137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:29.382 [2024-11-19 12:06:32.707327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:29.382 [2024-11-19 12:06:32.707345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:29.382 [2024-11-19 12:06:32.707492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.382 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.640 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.640 "name": "raid_bdev1", 00:14:29.640 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:29.640 "strip_size_kb": 0, 00:14:29.640 "state": "online", 00:14:29.640 "raid_level": "raid1", 00:14:29.640 "superblock": true, 00:14:29.640 "num_base_bdevs": 4, 00:14:29.640 "num_base_bdevs_discovered": 4, 00:14:29.640 "num_base_bdevs_operational": 4, 00:14:29.640 "base_bdevs_list": [ 00:14:29.640 { 00:14:29.640 "name": "BaseBdev1", 00:14:29.640 "uuid": "2d6708ef-61ff-50fe-aac7-9f15f567dcdc", 00:14:29.640 "is_configured": true, 00:14:29.640 "data_offset": 2048, 00:14:29.640 "data_size": 63488 00:14:29.640 }, 00:14:29.640 { 00:14:29.640 "name": "BaseBdev2", 00:14:29.640 "uuid": "01ad22ce-f9d1-5bfb-a925-21556ce126ab", 00:14:29.640 "is_configured": true, 00:14:29.640 "data_offset": 2048, 00:14:29.640 "data_size": 63488 00:14:29.640 }, 00:14:29.640 { 00:14:29.640 "name": "BaseBdev3", 00:14:29.640 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:29.640 "is_configured": true, 00:14:29.640 "data_offset": 2048, 00:14:29.640 "data_size": 63488 00:14:29.640 }, 00:14:29.640 { 00:14:29.640 "name": "BaseBdev4", 00:14:29.640 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:29.640 "is_configured": true, 00:14:29.640 "data_offset": 2048, 00:14:29.640 "data_size": 63488 
00:14:29.640 } 00:14:29.640 ] 00:14:29.640 }' 00:14:29.640 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.640 12:06:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.900 [2024-11-19 12:06:33.152425] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.900 [2024-11-19 12:06:33.239917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.900 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.160 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.160 "name": "raid_bdev1", 00:14:30.161 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:30.161 "strip_size_kb": 0, 00:14:30.161 "state": "online", 00:14:30.161 "raid_level": "raid1", 00:14:30.161 "superblock": true, 00:14:30.161 "num_base_bdevs": 4, 00:14:30.161 "num_base_bdevs_discovered": 3, 00:14:30.161 "num_base_bdevs_operational": 3, 00:14:30.161 "base_bdevs_list": [ 00:14:30.161 { 00:14:30.161 "name": null, 00:14:30.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.161 "is_configured": false, 00:14:30.161 "data_offset": 0, 00:14:30.161 "data_size": 63488 00:14:30.161 }, 00:14:30.161 { 00:14:30.161 "name": "BaseBdev2", 00:14:30.161 "uuid": "01ad22ce-f9d1-5bfb-a925-21556ce126ab", 00:14:30.161 "is_configured": true, 00:14:30.161 "data_offset": 2048, 00:14:30.161 "data_size": 63488 00:14:30.161 }, 00:14:30.161 { 00:14:30.161 "name": "BaseBdev3", 00:14:30.161 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:30.161 "is_configured": true, 00:14:30.161 "data_offset": 2048, 00:14:30.161 "data_size": 63488 00:14:30.161 }, 00:14:30.161 { 00:14:30.161 "name": "BaseBdev4", 00:14:30.161 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:30.161 "is_configured": true, 00:14:30.161 "data_offset": 2048, 00:14:30.161 "data_size": 63488 00:14:30.161 } 00:14:30.161 ] 00:14:30.161 }' 00:14:30.161 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.161 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.161 [2024-11-19 12:06:33.339668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:30.161 I/O size 
of 3145728 is greater than zero copy threshold (65536). 00:14:30.161 Zero copy mechanism will not be used. 00:14:30.161 Running I/O for 60 seconds... 00:14:30.420 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:30.420 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.420 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.420 [2024-11-19 12:06:33.641012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:30.420 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.421 12:06:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:30.421 [2024-11-19 12:06:33.711122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:30.421 [2024-11-19 12:06:33.713067] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:30.680 [2024-11-19 12:06:33.834730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:30.680 [2024-11-19 12:06:33.836060] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:30.939 [2024-11-19 12:06:34.088082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:31.199 206.00 IOPS, 618.00 MiB/s [2024-11-19T12:06:34.576Z] [2024-11-19 12:06:34.431093] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:31.199 [2024-11-19 12:06:34.431802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:31.459 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.459 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.459 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.459 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.459 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.459 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.459 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.459 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.459 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.459 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.459 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.459 "name": "raid_bdev1", 00:14:31.459 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:31.459 "strip_size_kb": 0, 00:14:31.459 "state": "online", 00:14:31.459 "raid_level": "raid1", 00:14:31.459 "superblock": true, 00:14:31.459 "num_base_bdevs": 4, 00:14:31.459 "num_base_bdevs_discovered": 4, 00:14:31.459 "num_base_bdevs_operational": 4, 00:14:31.459 "process": { 00:14:31.459 "type": "rebuild", 00:14:31.459 "target": "spare", 00:14:31.459 "progress": { 00:14:31.459 "blocks": 12288, 00:14:31.459 "percent": 19 00:14:31.459 } 00:14:31.459 }, 00:14:31.459 "base_bdevs_list": [ 00:14:31.459 { 00:14:31.459 "name": "spare", 00:14:31.459 "uuid": "b00bc5c8-b7d3-54db-897e-2cd79220b8c4", 00:14:31.459 "is_configured": true, 00:14:31.459 "data_offset": 2048, 00:14:31.459 "data_size": 63488 
00:14:31.459 }, 00:14:31.459 { 00:14:31.459 "name": "BaseBdev2", 00:14:31.459 "uuid": "01ad22ce-f9d1-5bfb-a925-21556ce126ab", 00:14:31.459 "is_configured": true, 00:14:31.459 "data_offset": 2048, 00:14:31.459 "data_size": 63488 00:14:31.459 }, 00:14:31.459 { 00:14:31.459 "name": "BaseBdev3", 00:14:31.459 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:31.459 "is_configured": true, 00:14:31.459 "data_offset": 2048, 00:14:31.459 "data_size": 63488 00:14:31.459 }, 00:14:31.459 { 00:14:31.459 "name": "BaseBdev4", 00:14:31.459 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:31.459 "is_configured": true, 00:14:31.459 "data_offset": 2048, 00:14:31.459 "data_size": 63488 00:14:31.459 } 00:14:31.459 ] 00:14:31.459 }' 00:14:31.459 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.459 [2024-11-19 12:06:34.780428] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:31.459 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.459 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.718 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.718 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:31.718 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.718 12:06:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.718 [2024-11-19 12:06:34.849486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.718 [2024-11-19 12:06:34.902801] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:31.718 [2024-11-19 
12:06:35.005077] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:31.718 [2024-11-19 12:06:35.015750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.718 [2024-11-19 12:06:35.015804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.718 [2024-11-19 12:06:35.015818] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:31.718 [2024-11-19 12:06:35.038928] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:31.718 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.718 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:31.718 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.718 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.718 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.719 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.719 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.719 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.719 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.719 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.719 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.719 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:31.719 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.719 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.719 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.719 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.977 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.977 "name": "raid_bdev1", 00:14:31.977 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:31.977 "strip_size_kb": 0, 00:14:31.977 "state": "online", 00:14:31.977 "raid_level": "raid1", 00:14:31.977 "superblock": true, 00:14:31.977 "num_base_bdevs": 4, 00:14:31.977 "num_base_bdevs_discovered": 3, 00:14:31.977 "num_base_bdevs_operational": 3, 00:14:31.977 "base_bdevs_list": [ 00:14:31.977 { 00:14:31.977 "name": null, 00:14:31.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.977 "is_configured": false, 00:14:31.977 "data_offset": 0, 00:14:31.977 "data_size": 63488 00:14:31.977 }, 00:14:31.977 { 00:14:31.977 "name": "BaseBdev2", 00:14:31.977 "uuid": "01ad22ce-f9d1-5bfb-a925-21556ce126ab", 00:14:31.977 "is_configured": true, 00:14:31.977 "data_offset": 2048, 00:14:31.977 "data_size": 63488 00:14:31.978 }, 00:14:31.978 { 00:14:31.978 "name": "BaseBdev3", 00:14:31.978 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:31.978 "is_configured": true, 00:14:31.978 "data_offset": 2048, 00:14:31.978 "data_size": 63488 00:14:31.978 }, 00:14:31.978 { 00:14:31.978 "name": "BaseBdev4", 00:14:31.978 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:31.978 "is_configured": true, 00:14:31.978 "data_offset": 2048, 00:14:31.978 "data_size": 63488 00:14:31.978 } 00:14:31.978 ] 00:14:31.978 }' 00:14:31.978 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.978 12:06:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.237 184.00 IOPS, 552.00 MiB/s [2024-11-19T12:06:35.614Z] 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.237 "name": "raid_bdev1", 00:14:32.237 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:32.237 "strip_size_kb": 0, 00:14:32.237 "state": "online", 00:14:32.237 "raid_level": "raid1", 00:14:32.237 "superblock": true, 00:14:32.237 "num_base_bdevs": 4, 00:14:32.237 "num_base_bdevs_discovered": 3, 00:14:32.237 "num_base_bdevs_operational": 3, 00:14:32.237 "base_bdevs_list": [ 00:14:32.237 { 00:14:32.237 "name": null, 00:14:32.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.237 "is_configured": false, 00:14:32.237 "data_offset": 0, 00:14:32.237 "data_size": 63488 00:14:32.237 }, 00:14:32.237 { 
00:14:32.237 "name": "BaseBdev2", 00:14:32.237 "uuid": "01ad22ce-f9d1-5bfb-a925-21556ce126ab", 00:14:32.237 "is_configured": true, 00:14:32.237 "data_offset": 2048, 00:14:32.237 "data_size": 63488 00:14:32.237 }, 00:14:32.237 { 00:14:32.237 "name": "BaseBdev3", 00:14:32.237 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:32.237 "is_configured": true, 00:14:32.237 "data_offset": 2048, 00:14:32.237 "data_size": 63488 00:14:32.237 }, 00:14:32.237 { 00:14:32.237 "name": "BaseBdev4", 00:14:32.237 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:32.237 "is_configured": true, 00:14:32.237 "data_offset": 2048, 00:14:32.237 "data_size": 63488 00:14:32.237 } 00:14:32.237 ] 00:14:32.237 }' 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.237 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.237 [2024-11-19 12:06:35.586590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.497 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.497 12:06:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:32.497 [2024-11-19 12:06:35.664722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:32.497 [2024-11-19 12:06:35.666670] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:32.497 [2024-11-19 12:06:35.782294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:32.497 [2024-11-19 12:06:35.783784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:32.756 [2024-11-19 12:06:36.000660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:32.756 [2024-11-19 12:06:36.000873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:33.016 [2024-11-19 12:06:36.248862] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:33.275 170.33 IOPS, 511.00 MiB/s [2024-11-19T12:06:36.652Z] [2024-11-19 12:06:36.479818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:33.275 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.275 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.275 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.275 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.275 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.275 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.275 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.275 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.275 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.534 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.534 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.534 "name": "raid_bdev1", 00:14:33.534 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:33.534 "strip_size_kb": 0, 00:14:33.534 "state": "online", 00:14:33.534 "raid_level": "raid1", 00:14:33.534 "superblock": true, 00:14:33.534 "num_base_bdevs": 4, 00:14:33.534 "num_base_bdevs_discovered": 4, 00:14:33.534 "num_base_bdevs_operational": 4, 00:14:33.534 "process": { 00:14:33.534 "type": "rebuild", 00:14:33.534 "target": "spare", 00:14:33.534 "progress": { 00:14:33.534 "blocks": 10240, 00:14:33.534 "percent": 16 00:14:33.534 } 00:14:33.534 }, 00:14:33.534 "base_bdevs_list": [ 00:14:33.534 { 00:14:33.534 "name": "spare", 00:14:33.534 "uuid": "b00bc5c8-b7d3-54db-897e-2cd79220b8c4", 00:14:33.534 "is_configured": true, 00:14:33.534 "data_offset": 2048, 00:14:33.534 "data_size": 63488 00:14:33.534 }, 00:14:33.534 { 00:14:33.534 "name": "BaseBdev2", 00:14:33.534 "uuid": "01ad22ce-f9d1-5bfb-a925-21556ce126ab", 00:14:33.534 "is_configured": true, 00:14:33.534 "data_offset": 2048, 00:14:33.534 "data_size": 63488 00:14:33.534 }, 00:14:33.534 { 00:14:33.534 "name": "BaseBdev3", 00:14:33.534 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:33.534 "is_configured": true, 00:14:33.534 "data_offset": 2048, 00:14:33.534 "data_size": 63488 00:14:33.534 }, 00:14:33.534 { 00:14:33.534 "name": "BaseBdev4", 00:14:33.534 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:33.534 "is_configured": true, 00:14:33.534 "data_offset": 2048, 00:14:33.534 "data_size": 63488 00:14:33.534 } 00:14:33.534 ] 00:14:33.534 }' 00:14:33.534 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:33.534 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.534 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.534 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.534 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:33.535 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:33.535 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:33.535 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:33.535 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:33.535 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:33.535 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:33.535 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.535 12:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.535 [2024-11-19 12:06:36.801443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:33.535 [2024-11-19 12:06:36.805696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:33.794 [2024-11-19 12:06:37.013278] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:33.794 [2024-11-19 12:06:37.013311] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:33.794 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:33.794 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:33.794 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:33.794 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.794 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.794 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.794 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.794 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.794 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.794 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.794 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.794 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.794 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.794 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.794 "name": "raid_bdev1", 00:14:33.794 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:33.794 "strip_size_kb": 0, 00:14:33.794 "state": "online", 00:14:33.794 "raid_level": "raid1", 00:14:33.794 "superblock": true, 00:14:33.794 "num_base_bdevs": 4, 00:14:33.794 "num_base_bdevs_discovered": 3, 00:14:33.794 "num_base_bdevs_operational": 3, 00:14:33.794 "process": { 00:14:33.794 "type": "rebuild", 00:14:33.794 "target": "spare", 00:14:33.794 "progress": { 00:14:33.794 "blocks": 14336, 00:14:33.794 "percent": 22 
00:14:33.794 } 00:14:33.794 }, 00:14:33.794 "base_bdevs_list": [ 00:14:33.794 { 00:14:33.794 "name": "spare", 00:14:33.795 "uuid": "b00bc5c8-b7d3-54db-897e-2cd79220b8c4", 00:14:33.795 "is_configured": true, 00:14:33.795 "data_offset": 2048, 00:14:33.795 "data_size": 63488 00:14:33.795 }, 00:14:33.795 { 00:14:33.795 "name": null, 00:14:33.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.795 "is_configured": false, 00:14:33.795 "data_offset": 0, 00:14:33.795 "data_size": 63488 00:14:33.795 }, 00:14:33.795 { 00:14:33.795 "name": "BaseBdev3", 00:14:33.795 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:33.795 "is_configured": true, 00:14:33.795 "data_offset": 2048, 00:14:33.795 "data_size": 63488 00:14:33.795 }, 00:14:33.795 { 00:14:33.795 "name": "BaseBdev4", 00:14:33.795 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:33.795 "is_configured": true, 00:14:33.795 "data_offset": 2048, 00:14:33.795 "data_size": 63488 00:14:33.795 } 00:14:33.795 ] 00:14:33.795 }' 00:14:33.795 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.795 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.795 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.795 [2024-11-19 12:06:37.138153] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:34.054 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.054 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=490 00:14:34.054 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.054 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.054 12:06:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.054 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.054 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.054 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.054 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.055 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.055 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.055 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.055 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.055 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.055 "name": "raid_bdev1", 00:14:34.055 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:34.055 "strip_size_kb": 0, 00:14:34.055 "state": "online", 00:14:34.055 "raid_level": "raid1", 00:14:34.055 "superblock": true, 00:14:34.055 "num_base_bdevs": 4, 00:14:34.055 "num_base_bdevs_discovered": 3, 00:14:34.055 "num_base_bdevs_operational": 3, 00:14:34.055 "process": { 00:14:34.055 "type": "rebuild", 00:14:34.055 "target": "spare", 00:14:34.055 "progress": { 00:14:34.055 "blocks": 16384, 00:14:34.055 "percent": 25 00:14:34.055 } 00:14:34.055 }, 00:14:34.055 "base_bdevs_list": [ 00:14:34.055 { 00:14:34.055 "name": "spare", 00:14:34.055 "uuid": "b00bc5c8-b7d3-54db-897e-2cd79220b8c4", 00:14:34.055 "is_configured": true, 00:14:34.055 "data_offset": 2048, 00:14:34.055 "data_size": 63488 00:14:34.055 }, 00:14:34.055 { 00:14:34.055 "name": null, 00:14:34.055 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:34.055 "is_configured": false, 00:14:34.055 "data_offset": 0, 00:14:34.055 "data_size": 63488 00:14:34.055 }, 00:14:34.055 { 00:14:34.055 "name": "BaseBdev3", 00:14:34.055 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:34.055 "is_configured": true, 00:14:34.055 "data_offset": 2048, 00:14:34.055 "data_size": 63488 00:14:34.055 }, 00:14:34.055 { 00:14:34.055 "name": "BaseBdev4", 00:14:34.055 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:34.055 "is_configured": true, 00:14:34.055 "data_offset": 2048, 00:14:34.055 "data_size": 63488 00:14:34.055 } 00:14:34.055 ] 00:14:34.055 }' 00:14:34.055 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.055 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.055 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.055 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.055 12:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:34.624 139.25 IOPS, 417.75 MiB/s [2024-11-19T12:06:38.001Z] [2024-11-19 12:06:37.893218] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.193 [2024-11-19 12:06:38.331455] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:35.193 [2024-11-19 12:06:38.331979] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.193 "name": "raid_bdev1", 00:14:35.193 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:35.193 "strip_size_kb": 0, 00:14:35.193 "state": "online", 00:14:35.193 "raid_level": "raid1", 00:14:35.193 "superblock": true, 00:14:35.193 "num_base_bdevs": 4, 00:14:35.193 "num_base_bdevs_discovered": 3, 00:14:35.193 "num_base_bdevs_operational": 3, 00:14:35.193 "process": { 00:14:35.193 "type": "rebuild", 00:14:35.193 "target": "spare", 00:14:35.193 "progress": { 00:14:35.193 "blocks": 32768, 00:14:35.193 "percent": 51 00:14:35.193 } 00:14:35.193 }, 00:14:35.193 "base_bdevs_list": [ 00:14:35.193 { 00:14:35.193 "name": "spare", 00:14:35.193 "uuid": "b00bc5c8-b7d3-54db-897e-2cd79220b8c4", 00:14:35.193 "is_configured": true, 00:14:35.193 "data_offset": 2048, 00:14:35.193 "data_size": 63488 00:14:35.193 }, 00:14:35.193 { 00:14:35.193 "name": null, 00:14:35.193 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:35.193 "is_configured": false, 00:14:35.193 "data_offset": 0, 00:14:35.193 "data_size": 63488 00:14:35.193 }, 00:14:35.193 { 00:14:35.193 "name": "BaseBdev3", 00:14:35.193 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:35.193 "is_configured": true, 00:14:35.193 "data_offset": 2048, 00:14:35.193 "data_size": 63488 00:14:35.193 }, 00:14:35.193 { 00:14:35.193 "name": "BaseBdev4", 00:14:35.193 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:35.193 "is_configured": true, 00:14:35.193 "data_offset": 2048, 00:14:35.193 "data_size": 63488 00:14:35.193 } 00:14:35.193 ] 00:14:35.193 }' 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.193 123.00 IOPS, 369.00 MiB/s [2024-11-19T12:06:38.570Z] 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.193 12:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:35.452 [2024-11-19 12:06:38.785529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:36.020 [2024-11-19 12:06:39.108586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:36.279 107.33 IOPS, 322.00 MiB/s [2024-11-19T12:06:39.656Z] 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.279 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.279 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:36.279 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.279 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.279 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.279 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.279 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.279 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.279 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.279 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.279 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.279 "name": "raid_bdev1", 00:14:36.279 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:36.279 "strip_size_kb": 0, 00:14:36.279 "state": "online", 00:14:36.279 "raid_level": "raid1", 00:14:36.279 "superblock": true, 00:14:36.279 "num_base_bdevs": 4, 00:14:36.279 "num_base_bdevs_discovered": 3, 00:14:36.279 "num_base_bdevs_operational": 3, 00:14:36.279 "process": { 00:14:36.279 "type": "rebuild", 00:14:36.279 "target": "spare", 00:14:36.279 "progress": { 00:14:36.279 "blocks": 51200, 00:14:36.279 "percent": 80 00:14:36.279 } 00:14:36.279 }, 00:14:36.279 "base_bdevs_list": [ 00:14:36.279 { 00:14:36.279 "name": "spare", 00:14:36.279 "uuid": "b00bc5c8-b7d3-54db-897e-2cd79220b8c4", 00:14:36.279 "is_configured": true, 00:14:36.279 "data_offset": 2048, 00:14:36.279 "data_size": 63488 00:14:36.279 }, 00:14:36.279 { 00:14:36.279 "name": null, 00:14:36.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.279 "is_configured": false, 00:14:36.279 
"data_offset": 0, 00:14:36.279 "data_size": 63488 00:14:36.279 }, 00:14:36.279 { 00:14:36.279 "name": "BaseBdev3", 00:14:36.279 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:36.279 "is_configured": true, 00:14:36.280 "data_offset": 2048, 00:14:36.280 "data_size": 63488 00:14:36.280 }, 00:14:36.280 { 00:14:36.280 "name": "BaseBdev4", 00:14:36.280 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:36.280 "is_configured": true, 00:14:36.280 "data_offset": 2048, 00:14:36.280 "data_size": 63488 00:14:36.280 } 00:14:36.280 ] 00:14:36.280 }' 00:14:36.280 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.280 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.280 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.280 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.280 12:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.538 [2024-11-19 12:06:39.761951] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:36.797 [2024-11-19 12:06:40.092657] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:37.056 [2024-11-19 12:06:40.192522] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:37.056 [2024-11-19 12:06:40.194858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.315 97.71 IOPS, 293.14 MiB/s [2024-11-19T12:06:40.692Z] 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.315 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.315 12:06:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.315 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.315 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.315 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.315 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.315 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.315 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.315 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.315 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.315 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.315 "name": "raid_bdev1", 00:14:37.315 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:37.315 "strip_size_kb": 0, 00:14:37.315 "state": "online", 00:14:37.315 "raid_level": "raid1", 00:14:37.315 "superblock": true, 00:14:37.315 "num_base_bdevs": 4, 00:14:37.315 "num_base_bdevs_discovered": 3, 00:14:37.315 "num_base_bdevs_operational": 3, 00:14:37.315 "base_bdevs_list": [ 00:14:37.315 { 00:14:37.315 "name": "spare", 00:14:37.315 "uuid": "b00bc5c8-b7d3-54db-897e-2cd79220b8c4", 00:14:37.315 "is_configured": true, 00:14:37.316 "data_offset": 2048, 00:14:37.316 "data_size": 63488 00:14:37.316 }, 00:14:37.316 { 00:14:37.316 "name": null, 00:14:37.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.316 "is_configured": false, 00:14:37.316 "data_offset": 0, 00:14:37.316 "data_size": 63488 00:14:37.316 }, 00:14:37.316 { 00:14:37.316 "name": "BaseBdev3", 00:14:37.316 "uuid": 
"282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:37.316 "is_configured": true, 00:14:37.316 "data_offset": 2048, 00:14:37.316 "data_size": 63488 00:14:37.316 }, 00:14:37.316 { 00:14:37.316 "name": "BaseBdev4", 00:14:37.316 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:37.316 "is_configured": true, 00:14:37.316 "data_offset": 2048, 00:14:37.316 "data_size": 63488 00:14:37.316 } 00:14:37.316 ] 00:14:37.316 }' 00:14:37.316 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.576 "name": "raid_bdev1", 00:14:37.576 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:37.576 "strip_size_kb": 0, 00:14:37.576 "state": "online", 00:14:37.576 "raid_level": "raid1", 00:14:37.576 "superblock": true, 00:14:37.576 "num_base_bdevs": 4, 00:14:37.576 "num_base_bdevs_discovered": 3, 00:14:37.576 "num_base_bdevs_operational": 3, 00:14:37.576 "base_bdevs_list": [ 00:14:37.576 { 00:14:37.576 "name": "spare", 00:14:37.576 "uuid": "b00bc5c8-b7d3-54db-897e-2cd79220b8c4", 00:14:37.576 "is_configured": true, 00:14:37.576 "data_offset": 2048, 00:14:37.576 "data_size": 63488 00:14:37.576 }, 00:14:37.576 { 00:14:37.576 "name": null, 00:14:37.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.576 "is_configured": false, 00:14:37.576 "data_offset": 0, 00:14:37.576 "data_size": 63488 00:14:37.576 }, 00:14:37.576 { 00:14:37.576 "name": "BaseBdev3", 00:14:37.576 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:37.576 "is_configured": true, 00:14:37.576 "data_offset": 2048, 00:14:37.576 "data_size": 63488 00:14:37.576 }, 00:14:37.576 { 00:14:37.576 "name": "BaseBdev4", 00:14:37.576 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:37.576 "is_configured": true, 00:14:37.576 "data_offset": 2048, 00:14:37.576 "data_size": 63488 00:14:37.576 } 00:14:37.576 ] 00:14:37.576 }' 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.576 
12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.576 "name": "raid_bdev1", 00:14:37.576 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:37.576 "strip_size_kb": 0, 00:14:37.576 "state": "online", 00:14:37.576 "raid_level": "raid1", 00:14:37.576 
"superblock": true, 00:14:37.576 "num_base_bdevs": 4, 00:14:37.576 "num_base_bdevs_discovered": 3, 00:14:37.576 "num_base_bdevs_operational": 3, 00:14:37.576 "base_bdevs_list": [ 00:14:37.576 { 00:14:37.576 "name": "spare", 00:14:37.576 "uuid": "b00bc5c8-b7d3-54db-897e-2cd79220b8c4", 00:14:37.576 "is_configured": true, 00:14:37.576 "data_offset": 2048, 00:14:37.576 "data_size": 63488 00:14:37.576 }, 00:14:37.576 { 00:14:37.576 "name": null, 00:14:37.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.576 "is_configured": false, 00:14:37.576 "data_offset": 0, 00:14:37.576 "data_size": 63488 00:14:37.576 }, 00:14:37.576 { 00:14:37.576 "name": "BaseBdev3", 00:14:37.576 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:37.576 "is_configured": true, 00:14:37.576 "data_offset": 2048, 00:14:37.576 "data_size": 63488 00:14:37.576 }, 00:14:37.576 { 00:14:37.576 "name": "BaseBdev4", 00:14:37.576 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:37.576 "is_configured": true, 00:14:37.576 "data_offset": 2048, 00:14:37.576 "data_size": 63488 00:14:37.576 } 00:14:37.576 ] 00:14:37.576 }' 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.576 12:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.145 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:38.145 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.145 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.145 [2024-11-19 12:06:41.266238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.145 [2024-11-19 12:06:41.266271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.145 89.12 IOPS, 267.38 MiB/s 00:14:38.145 Latency(us) 00:14:38.145 [2024-11-19T12:06:41.522Z] 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.145 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:38.145 raid_bdev1 : 8.04 88.81 266.42 0.00 0.00 15739.84 284.39 110810.21 00:14:38.145 [2024-11-19T12:06:41.522Z] =================================================================================================================== 00:14:38.145 [2024-11-19T12:06:41.522Z] Total : 88.81 266.42 0.00 0.00 15739.84 284.39 110810.21 00:14:38.145 [2024-11-19 12:06:41.386069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.145 [2024-11-19 12:06:41.386106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.145 [2024-11-19 12:06:41.386205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.145 [2024-11-19 12:06:41.386214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:38.145 { 00:14:38.145 "results": [ 00:14:38.145 { 00:14:38.145 "job": "raid_bdev1", 00:14:38.145 "core_mask": "0x1", 00:14:38.145 "workload": "randrw", 00:14:38.145 "percentage": 50, 00:14:38.145 "status": "finished", 00:14:38.145 "queue_depth": 2, 00:14:38.145 "io_size": 3145728, 00:14:38.145 "runtime": 8.039833, 00:14:38.145 "iops": 88.80781478918779, 00:14:38.145 "mibps": 266.4234443675634, 00:14:38.145 "io_failed": 0, 00:14:38.145 "io_timeout": 0, 00:14:38.145 "avg_latency_us": 15739.836825559922, 00:14:38.145 "min_latency_us": 284.3947598253275, 00:14:38.145 "max_latency_us": 110810.21484716157 00:14:38.145 } 00:14:38.145 ], 00:14:38.145 "core_count": 1 00:14:38.145 } 00:14:38.145 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.145 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.145 12:06:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.145 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:38.145 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.145 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.145 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:38.145 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:38.145 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:38.145 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:38.145 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.145 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:38.146 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.146 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:38.146 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.146 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:38.146 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.146 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.146 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:38.404 /dev/nbd0 00:14:38.404 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:38.404 12:06:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:38.404 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:38.404 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:38.404 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:38.404 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:38.404 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:38.404 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:38.404 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:38.404 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:38.404 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.404 1+0 records in 00:14:38.404 1+0 records out 00:14:38.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246346 s, 16.6 MB/s 00:14:38.404 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.404 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:38.404 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.404 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.405 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:38.664 /dev/nbd1 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.664 1+0 records in 00:14:38.664 1+0 records out 00:14:38.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468693 s, 8.7 MB/s 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.664 12:06:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.664 12:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:38.924 
12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.924 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:39.183 /dev/nbd1 00:14:39.183 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:39.183 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:39.183 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:39.183 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:39.183 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:39.183 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:39.184 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:39.184 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:39.184 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:39.184 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:39.184 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.184 1+0 records in 00:14:39.184 1+0 records out 00:14:39.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412847 s, 9.9 MB/s 00:14:39.184 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.184 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:39.184 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.184 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:39.184 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:39.184 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.184 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:39.184 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:39.443 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.443 12:06:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.703 12:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.703 [2024-11-19 12:06:43.000077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:39.703 
[2024-11-19 12:06:43.000129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.703 [2024-11-19 12:06:43.000152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:39.703 [2024-11-19 12:06:43.000161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.703 [2024-11-19 12:06:43.002269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.703 [2024-11-19 12:06:43.002310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:39.703 [2024-11-19 12:06:43.002404] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:39.703 [2024-11-19 12:06:43.002460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.703 [2024-11-19 12:06:43.002616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:39.703 [2024-11-19 12:06:43.002712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:39.703 spare 00:14:39.703 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.703 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:39.703 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.703 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.962 [2024-11-19 12:06:43.102601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:39.963 [2024-11-19 12:06:43.102626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:39.963 [2024-11-19 12:06:43.102906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:39.963 [2024-11-19 12:06:43.103100] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:39.963 [2024-11-19 12:06:43.103133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:39.963 [2024-11-19 12:06:43.103299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.963 12:06:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.963 "name": "raid_bdev1", 00:14:39.963 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:39.963 "strip_size_kb": 0, 00:14:39.963 "state": "online", 00:14:39.963 "raid_level": "raid1", 00:14:39.963 "superblock": true, 00:14:39.963 "num_base_bdevs": 4, 00:14:39.963 "num_base_bdevs_discovered": 3, 00:14:39.963 "num_base_bdevs_operational": 3, 00:14:39.963 "base_bdevs_list": [ 00:14:39.963 { 00:14:39.963 "name": "spare", 00:14:39.963 "uuid": "b00bc5c8-b7d3-54db-897e-2cd79220b8c4", 00:14:39.963 "is_configured": true, 00:14:39.963 "data_offset": 2048, 00:14:39.963 "data_size": 63488 00:14:39.963 }, 00:14:39.963 { 00:14:39.963 "name": null, 00:14:39.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.963 "is_configured": false, 00:14:39.963 "data_offset": 2048, 00:14:39.963 "data_size": 63488 00:14:39.963 }, 00:14:39.963 { 00:14:39.963 "name": "BaseBdev3", 00:14:39.963 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:39.963 "is_configured": true, 00:14:39.963 "data_offset": 2048, 00:14:39.963 "data_size": 63488 00:14:39.963 }, 00:14:39.963 { 00:14:39.963 "name": "BaseBdev4", 00:14:39.963 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:39.963 "is_configured": true, 00:14:39.963 "data_offset": 2048, 00:14:39.963 "data_size": 63488 00:14:39.963 } 00:14:39.963 ] 00:14:39.963 }' 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.963 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.221 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.221 12:06:43 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.221 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.221 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.221 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.221 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.221 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.221 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.221 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.221 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.481 "name": "raid_bdev1", 00:14:40.481 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:40.481 "strip_size_kb": 0, 00:14:40.481 "state": "online", 00:14:40.481 "raid_level": "raid1", 00:14:40.481 "superblock": true, 00:14:40.481 "num_base_bdevs": 4, 00:14:40.481 "num_base_bdevs_discovered": 3, 00:14:40.481 "num_base_bdevs_operational": 3, 00:14:40.481 "base_bdevs_list": [ 00:14:40.481 { 00:14:40.481 "name": "spare", 00:14:40.481 "uuid": "b00bc5c8-b7d3-54db-897e-2cd79220b8c4", 00:14:40.481 "is_configured": true, 00:14:40.481 "data_offset": 2048, 00:14:40.481 "data_size": 63488 00:14:40.481 }, 00:14:40.481 { 00:14:40.481 "name": null, 00:14:40.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.481 "is_configured": false, 00:14:40.481 "data_offset": 2048, 00:14:40.481 "data_size": 63488 00:14:40.481 }, 00:14:40.481 { 00:14:40.481 "name": "BaseBdev3", 00:14:40.481 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 
00:14:40.481 "is_configured": true, 00:14:40.481 "data_offset": 2048, 00:14:40.481 "data_size": 63488 00:14:40.481 }, 00:14:40.481 { 00:14:40.481 "name": "BaseBdev4", 00:14:40.481 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:40.481 "is_configured": true, 00:14:40.481 "data_offset": 2048, 00:14:40.481 "data_size": 63488 00:14:40.481 } 00:14:40.481 ] 00:14:40.481 }' 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.481 [2024-11-19 12:06:43.778886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.481 "name": "raid_bdev1", 00:14:40.481 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:40.481 "strip_size_kb": 0, 00:14:40.481 "state": 
"online", 00:14:40.481 "raid_level": "raid1", 00:14:40.481 "superblock": true, 00:14:40.481 "num_base_bdevs": 4, 00:14:40.481 "num_base_bdevs_discovered": 2, 00:14:40.481 "num_base_bdevs_operational": 2, 00:14:40.481 "base_bdevs_list": [ 00:14:40.481 { 00:14:40.481 "name": null, 00:14:40.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.481 "is_configured": false, 00:14:40.481 "data_offset": 0, 00:14:40.481 "data_size": 63488 00:14:40.481 }, 00:14:40.481 { 00:14:40.481 "name": null, 00:14:40.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.481 "is_configured": false, 00:14:40.481 "data_offset": 2048, 00:14:40.481 "data_size": 63488 00:14:40.481 }, 00:14:40.481 { 00:14:40.481 "name": "BaseBdev3", 00:14:40.481 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:40.481 "is_configured": true, 00:14:40.481 "data_offset": 2048, 00:14:40.481 "data_size": 63488 00:14:40.481 }, 00:14:40.481 { 00:14:40.481 "name": "BaseBdev4", 00:14:40.481 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:40.481 "is_configured": true, 00:14:40.481 "data_offset": 2048, 00:14:40.481 "data_size": 63488 00:14:40.481 } 00:14:40.481 ] 00:14:40.481 }' 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.481 12:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.051 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:41.051 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.051 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.051 [2024-11-19 12:06:44.206228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:41.051 [2024-11-19 12:06:44.206437] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev 
raid_bdev1 (6) 00:14:41.051 [2024-11-19 12:06:44.206451] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:41.051 [2024-11-19 12:06:44.206489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:41.051 [2024-11-19 12:06:44.220825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:41.051 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.051 12:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:41.051 [2024-11-19 12:06:44.222639] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:41.988 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.988 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.988 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.988 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.988 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.988 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.988 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.988 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.988 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.988 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.988 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.988 
"name": "raid_bdev1", 00:14:41.988 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:41.988 "strip_size_kb": 0, 00:14:41.988 "state": "online", 00:14:41.988 "raid_level": "raid1", 00:14:41.988 "superblock": true, 00:14:41.988 "num_base_bdevs": 4, 00:14:41.988 "num_base_bdevs_discovered": 3, 00:14:41.988 "num_base_bdevs_operational": 3, 00:14:41.988 "process": { 00:14:41.988 "type": "rebuild", 00:14:41.988 "target": "spare", 00:14:41.988 "progress": { 00:14:41.988 "blocks": 20480, 00:14:41.988 "percent": 32 00:14:41.988 } 00:14:41.988 }, 00:14:41.988 "base_bdevs_list": [ 00:14:41.988 { 00:14:41.988 "name": "spare", 00:14:41.988 "uuid": "b00bc5c8-b7d3-54db-897e-2cd79220b8c4", 00:14:41.988 "is_configured": true, 00:14:41.988 "data_offset": 2048, 00:14:41.988 "data_size": 63488 00:14:41.988 }, 00:14:41.988 { 00:14:41.988 "name": null, 00:14:41.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.988 "is_configured": false, 00:14:41.988 "data_offset": 2048, 00:14:41.988 "data_size": 63488 00:14:41.988 }, 00:14:41.988 { 00:14:41.988 "name": "BaseBdev3", 00:14:41.988 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:41.988 "is_configured": true, 00:14:41.988 "data_offset": 2048, 00:14:41.988 "data_size": 63488 00:14:41.988 }, 00:14:41.988 { 00:14:41.988 "name": "BaseBdev4", 00:14:41.988 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:41.988 "is_configured": true, 00:14:41.988 "data_offset": 2048, 00:14:41.988 "data_size": 63488 00:14:41.988 } 00:14:41.988 ] 00:14:41.988 }' 00:14:41.988 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.988 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.988 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.248 
12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.248 [2024-11-19 12:06:45.382524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.248 [2024-11-19 12:06:45.427468] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:42.248 [2024-11-19 12:06:45.427523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.248 [2024-11-19 12:06:45.427541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.248 [2024-11-19 12:06:45.427548] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.248 12:06:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.248 "name": "raid_bdev1", 00:14:42.248 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:42.248 "strip_size_kb": 0, 00:14:42.248 "state": "online", 00:14:42.248 "raid_level": "raid1", 00:14:42.248 "superblock": true, 00:14:42.248 "num_base_bdevs": 4, 00:14:42.248 "num_base_bdevs_discovered": 2, 00:14:42.248 "num_base_bdevs_operational": 2, 00:14:42.248 "base_bdevs_list": [ 00:14:42.248 { 00:14:42.248 "name": null, 00:14:42.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.248 "is_configured": false, 00:14:42.248 "data_offset": 0, 00:14:42.248 "data_size": 63488 00:14:42.248 }, 00:14:42.248 { 00:14:42.248 "name": null, 00:14:42.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.248 "is_configured": false, 00:14:42.248 "data_offset": 2048, 00:14:42.248 "data_size": 63488 00:14:42.248 }, 00:14:42.248 { 00:14:42.248 "name": "BaseBdev3", 00:14:42.248 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:42.248 "is_configured": true, 00:14:42.248 "data_offset": 2048, 00:14:42.248 "data_size": 63488 00:14:42.248 }, 00:14:42.248 { 00:14:42.248 "name": "BaseBdev4", 00:14:42.248 "uuid": 
"46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:42.248 "is_configured": true, 00:14:42.248 "data_offset": 2048, 00:14:42.248 "data_size": 63488 00:14:42.248 } 00:14:42.248 ] 00:14:42.248 }' 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.248 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.817 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:42.817 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.817 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.817 [2024-11-19 12:06:45.894735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:42.817 [2024-11-19 12:06:45.894813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.817 [2024-11-19 12:06:45.894844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:42.817 [2024-11-19 12:06:45.894854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.817 [2024-11-19 12:06:45.895351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.817 [2024-11-19 12:06:45.895378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:42.817 [2024-11-19 12:06:45.895473] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:42.817 [2024-11-19 12:06:45.895491] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:42.818 [2024-11-19 12:06:45.895506] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:42.818 [2024-11-19 12:06:45.895526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.818 [2024-11-19 12:06:45.909931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:42.818 spare 00:14:42.818 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.818 12:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:42.818 [2024-11-19 12:06:45.911757] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:43.756 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.756 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.756 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.756 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.756 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.756 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.756 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.756 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.756 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.756 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.756 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.756 "name": "raid_bdev1", 00:14:43.756 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:43.756 "strip_size_kb": 0, 00:14:43.756 
"state": "online", 00:14:43.756 "raid_level": "raid1", 00:14:43.756 "superblock": true, 00:14:43.756 "num_base_bdevs": 4, 00:14:43.756 "num_base_bdevs_discovered": 3, 00:14:43.756 "num_base_bdevs_operational": 3, 00:14:43.756 "process": { 00:14:43.756 "type": "rebuild", 00:14:43.756 "target": "spare", 00:14:43.756 "progress": { 00:14:43.756 "blocks": 20480, 00:14:43.756 "percent": 32 00:14:43.756 } 00:14:43.756 }, 00:14:43.756 "base_bdevs_list": [ 00:14:43.756 { 00:14:43.756 "name": "spare", 00:14:43.756 "uuid": "b00bc5c8-b7d3-54db-897e-2cd79220b8c4", 00:14:43.756 "is_configured": true, 00:14:43.756 "data_offset": 2048, 00:14:43.756 "data_size": 63488 00:14:43.756 }, 00:14:43.756 { 00:14:43.756 "name": null, 00:14:43.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.756 "is_configured": false, 00:14:43.756 "data_offset": 2048, 00:14:43.756 "data_size": 63488 00:14:43.756 }, 00:14:43.756 { 00:14:43.756 "name": "BaseBdev3", 00:14:43.756 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:43.756 "is_configured": true, 00:14:43.756 "data_offset": 2048, 00:14:43.756 "data_size": 63488 00:14:43.756 }, 00:14:43.756 { 00:14:43.756 "name": "BaseBdev4", 00:14:43.756 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:43.756 "is_configured": true, 00:14:43.756 "data_offset": 2048, 00:14:43.756 "data_size": 63488 00:14:43.756 } 00:14:43.756 ] 00:14:43.756 }' 00:14:43.756 12:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.756 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.756 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.756 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.756 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:43.756 12:06:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.756 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.756 [2024-11-19 12:06:47.075608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.756 [2024-11-19 12:06:47.116511] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:43.756 [2024-11-19 12:06:47.116597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.756 [2024-11-19 12:06:47.116614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.756 [2024-11-19 12:06:47.116623] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.016 12:06:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.016 "name": "raid_bdev1", 00:14:44.016 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:44.016 "strip_size_kb": 0, 00:14:44.016 "state": "online", 00:14:44.016 "raid_level": "raid1", 00:14:44.016 "superblock": true, 00:14:44.016 "num_base_bdevs": 4, 00:14:44.016 "num_base_bdevs_discovered": 2, 00:14:44.016 "num_base_bdevs_operational": 2, 00:14:44.016 "base_bdevs_list": [ 00:14:44.016 { 00:14:44.016 "name": null, 00:14:44.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.016 "is_configured": false, 00:14:44.016 "data_offset": 0, 00:14:44.016 "data_size": 63488 00:14:44.016 }, 00:14:44.016 { 00:14:44.016 "name": null, 00:14:44.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.016 "is_configured": false, 00:14:44.016 "data_offset": 2048, 00:14:44.016 "data_size": 63488 00:14:44.016 }, 00:14:44.016 { 00:14:44.016 "name": "BaseBdev3", 00:14:44.016 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:44.016 "is_configured": true, 00:14:44.016 "data_offset": 2048, 00:14:44.016 "data_size": 63488 00:14:44.016 }, 00:14:44.016 { 00:14:44.016 "name": "BaseBdev4", 00:14:44.016 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:44.016 "is_configured": true, 00:14:44.016 "data_offset": 2048, 00:14:44.016 
"data_size": 63488 00:14:44.016 } 00:14:44.016 ] 00:14:44.016 }' 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.016 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.276 "name": "raid_bdev1", 00:14:44.276 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:44.276 "strip_size_kb": 0, 00:14:44.276 "state": "online", 00:14:44.276 "raid_level": "raid1", 00:14:44.276 "superblock": true, 00:14:44.276 "num_base_bdevs": 4, 00:14:44.276 "num_base_bdevs_discovered": 2, 00:14:44.276 "num_base_bdevs_operational": 2, 00:14:44.276 "base_bdevs_list": [ 00:14:44.276 { 00:14:44.276 "name": null, 00:14:44.276 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:44.276 "is_configured": false, 00:14:44.276 "data_offset": 0, 00:14:44.276 "data_size": 63488 00:14:44.276 }, 00:14:44.276 { 00:14:44.276 "name": null, 00:14:44.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.276 "is_configured": false, 00:14:44.276 "data_offset": 2048, 00:14:44.276 "data_size": 63488 00:14:44.276 }, 00:14:44.276 { 00:14:44.276 "name": "BaseBdev3", 00:14:44.276 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:44.276 "is_configured": true, 00:14:44.276 "data_offset": 2048, 00:14:44.276 "data_size": 63488 00:14:44.276 }, 00:14:44.276 { 00:14:44.276 "name": "BaseBdev4", 00:14:44.276 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:44.276 "is_configured": true, 00:14:44.276 "data_offset": 2048, 00:14:44.276 "data_size": 63488 00:14:44.276 } 00:14:44.276 ] 00:14:44.276 }' 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.276 12:06:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.276 [2024-11-19 12:06:47.632023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:44.276 [2024-11-19 12:06:47.632086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.276 [2024-11-19 12:06:47.632107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:44.276 [2024-11-19 12:06:47.632120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.276 [2024-11-19 12:06:47.632583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.276 [2024-11-19 12:06:47.632614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.276 [2024-11-19 12:06:47.632692] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:44.276 [2024-11-19 12:06:47.632716] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:44.276 [2024-11-19 12:06:47.632725] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:44.276 [2024-11-19 12:06:47.632736] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:44.276 BaseBdev1 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.276 12:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:45.281 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:45.281 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.281 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:45.281 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.281 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.281 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.281 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.281 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.281 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.281 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.281 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.281 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.281 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.281 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.541 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.541 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.541 "name": "raid_bdev1", 00:14:45.541 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:45.541 "strip_size_kb": 0, 00:14:45.541 "state": "online", 00:14:45.541 "raid_level": "raid1", 00:14:45.541 "superblock": true, 00:14:45.541 "num_base_bdevs": 4, 00:14:45.541 "num_base_bdevs_discovered": 2, 00:14:45.541 "num_base_bdevs_operational": 2, 00:14:45.541 "base_bdevs_list": [ 00:14:45.541 { 00:14:45.541 "name": null, 00:14:45.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.541 "is_configured": false, 00:14:45.541 
"data_offset": 0, 00:14:45.541 "data_size": 63488 00:14:45.541 }, 00:14:45.541 { 00:14:45.541 "name": null, 00:14:45.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.541 "is_configured": false, 00:14:45.541 "data_offset": 2048, 00:14:45.541 "data_size": 63488 00:14:45.541 }, 00:14:45.541 { 00:14:45.541 "name": "BaseBdev3", 00:14:45.541 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:45.541 "is_configured": true, 00:14:45.541 "data_offset": 2048, 00:14:45.541 "data_size": 63488 00:14:45.541 }, 00:14:45.541 { 00:14:45.541 "name": "BaseBdev4", 00:14:45.541 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:45.541 "is_configured": true, 00:14:45.541 "data_offset": 2048, 00:14:45.541 "data_size": 63488 00:14:45.541 } 00:14:45.541 ] 00:14:45.541 }' 00:14:45.541 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.541 12:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.799 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.799 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.799 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.799 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.799 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.799 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.799 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.799 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.799 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:45.799 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.799 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.799 "name": "raid_bdev1", 00:14:45.799 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:45.799 "strip_size_kb": 0, 00:14:45.799 "state": "online", 00:14:45.799 "raid_level": "raid1", 00:14:45.799 "superblock": true, 00:14:45.799 "num_base_bdevs": 4, 00:14:45.799 "num_base_bdevs_discovered": 2, 00:14:45.799 "num_base_bdevs_operational": 2, 00:14:45.799 "base_bdevs_list": [ 00:14:45.799 { 00:14:45.799 "name": null, 00:14:45.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.799 "is_configured": false, 00:14:45.799 "data_offset": 0, 00:14:45.799 "data_size": 63488 00:14:45.799 }, 00:14:45.799 { 00:14:45.799 "name": null, 00:14:45.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.799 "is_configured": false, 00:14:45.799 "data_offset": 2048, 00:14:45.799 "data_size": 63488 00:14:45.799 }, 00:14:45.799 { 00:14:45.799 "name": "BaseBdev3", 00:14:45.799 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:45.799 "is_configured": true, 00:14:45.799 "data_offset": 2048, 00:14:45.799 "data_size": 63488 00:14:45.799 }, 00:14:45.799 { 00:14:45.799 "name": "BaseBdev4", 00:14:45.799 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:45.799 "is_configured": true, 00:14:45.799 "data_offset": 2048, 00:14:45.799 "data_size": 63488 00:14:45.799 } 00:14:45.799 ] 00:14:45.799 }' 00:14:45.799 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.058 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.058 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.058 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:14:46.058 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:46.058 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:46.059 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:46.059 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:46.059 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:46.059 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:46.059 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:46.059 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:46.059 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.059 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.059 [2024-11-19 12:06:49.229588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.059 [2024-11-19 12:06:49.229777] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:46.059 [2024-11-19 12:06:49.229789] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:46.059 request: 00:14:46.059 { 00:14:46.059 "base_bdev": "BaseBdev1", 00:14:46.059 "raid_bdev": "raid_bdev1", 00:14:46.059 "method": "bdev_raid_add_base_bdev", 00:14:46.059 "req_id": 1 00:14:46.059 } 00:14:46.059 Got JSON-RPC error response 00:14:46.059 response: 00:14:46.059 { 00:14:46.059 "code": -22, 
00:14:46.059 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:46.059 } 00:14:46.059 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:46.059 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:46.059 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:46.059 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:46.059 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:46.059 12:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:46.995 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:46.995 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.996 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.996 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.996 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.996 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.996 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.996 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.996 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.996 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.996 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.996 12:06:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.996 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.996 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.996 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.996 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.996 "name": "raid_bdev1", 00:14:46.996 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:46.996 "strip_size_kb": 0, 00:14:46.996 "state": "online", 00:14:46.996 "raid_level": "raid1", 00:14:46.996 "superblock": true, 00:14:46.996 "num_base_bdevs": 4, 00:14:46.996 "num_base_bdevs_discovered": 2, 00:14:46.996 "num_base_bdevs_operational": 2, 00:14:46.996 "base_bdevs_list": [ 00:14:46.996 { 00:14:46.996 "name": null, 00:14:46.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.996 "is_configured": false, 00:14:46.996 "data_offset": 0, 00:14:46.996 "data_size": 63488 00:14:46.996 }, 00:14:46.996 { 00:14:46.996 "name": null, 00:14:46.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.996 "is_configured": false, 00:14:46.996 "data_offset": 2048, 00:14:46.996 "data_size": 63488 00:14:46.996 }, 00:14:46.996 { 00:14:46.996 "name": "BaseBdev3", 00:14:46.996 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:46.996 "is_configured": true, 00:14:46.996 "data_offset": 2048, 00:14:46.996 "data_size": 63488 00:14:46.996 }, 00:14:46.996 { 00:14:46.996 "name": "BaseBdev4", 00:14:46.996 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:46.996 "is_configured": true, 00:14:46.996 "data_offset": 2048, 00:14:46.996 "data_size": 63488 00:14:46.996 } 00:14:46.996 ] 00:14:46.996 }' 00:14:46.996 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.996 12:06:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.565 "name": "raid_bdev1", 00:14:47.565 "uuid": "1d2c482a-c0a1-451f-95d8-8133119399f7", 00:14:47.565 "strip_size_kb": 0, 00:14:47.565 "state": "online", 00:14:47.565 "raid_level": "raid1", 00:14:47.565 "superblock": true, 00:14:47.565 "num_base_bdevs": 4, 00:14:47.565 "num_base_bdevs_discovered": 2, 00:14:47.565 "num_base_bdevs_operational": 2, 00:14:47.565 "base_bdevs_list": [ 00:14:47.565 { 00:14:47.565 "name": null, 00:14:47.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.565 "is_configured": false, 00:14:47.565 "data_offset": 0, 00:14:47.565 "data_size": 63488 00:14:47.565 }, 00:14:47.565 { 00:14:47.565 "name": null, 00:14:47.565 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:47.565 "is_configured": false, 00:14:47.565 "data_offset": 2048, 00:14:47.565 "data_size": 63488 00:14:47.565 }, 00:14:47.565 { 00:14:47.565 "name": "BaseBdev3", 00:14:47.565 "uuid": "282633a0-58b0-5db1-bd9a-558798f14fcc", 00:14:47.565 "is_configured": true, 00:14:47.565 "data_offset": 2048, 00:14:47.565 "data_size": 63488 00:14:47.565 }, 00:14:47.565 { 00:14:47.565 "name": "BaseBdev4", 00:14:47.565 "uuid": "46ec96e3-fd88-5754-9f75-28929feb4978", 00:14:47.565 "is_configured": true, 00:14:47.565 "data_offset": 2048, 00:14:47.565 "data_size": 63488 00:14:47.565 } 00:14:47.565 ] 00:14:47.565 }' 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79119 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79119 ']' 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79119 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79119 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:14:47.565 killing process with pid 79119 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79119' 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79119 00:14:47.565 Received shutdown signal, test time was about 17.589972 seconds 00:14:47.565 00:14:47.565 Latency(us) 00:14:47.565 [2024-11-19T12:06:50.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.565 [2024-11-19T12:06:50.942Z] =================================================================================================================== 00:14:47.565 [2024-11-19T12:06:50.942Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:47.565 12:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79119 00:14:47.565 [2024-11-19 12:06:50.897752] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:47.565 [2024-11-19 12:06:50.897873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.565 [2024-11-19 12:06:50.897950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.565 [2024-11-19 12:06:50.897959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:48.135 [2024-11-19 12:06:51.294021] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.074 12:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:49.074 00:14:49.074 real 0m20.911s 00:14:49.074 user 0m27.292s 00:14:49.074 sys 0m2.441s 00:14:49.074 12:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.074 12:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.074 ************************************ 00:14:49.074 END TEST raid_rebuild_test_sb_io 00:14:49.074 
************************************ 00:14:49.074 12:06:52 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:49.074 12:06:52 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:49.074 12:06:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:49.074 12:06:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.074 12:06:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:49.333 ************************************ 00:14:49.333 START TEST raid5f_state_function_test 00:14:49.333 ************************************ 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.333 12:06:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79836 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:49.333 Process raid pid: 79836 00:14:49.333 
12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79836' 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79836 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79836 ']' 00:14:49.333 12:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.334 12:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.334 12:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.334 12:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.334 12:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.334 [2024-11-19 12:06:52.552733] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:14:49.334 [2024-11-19 12:06:52.552855] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.593 [2024-11-19 12:06:52.723441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.593 [2024-11-19 12:06:52.828955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.853 [2024-11-19 12:06:53.030794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.853 [2024-11-19 12:06:53.030826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.112 [2024-11-19 12:06:53.363336] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.112 [2024-11-19 12:06:53.363385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.112 [2024-11-19 12:06:53.363394] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.112 [2024-11-19 12:06:53.363419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.112 [2024-11-19 12:06:53.363426] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:50.112 [2024-11-19 12:06:53.363434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.112 "name": "Existed_Raid", 00:14:50.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.112 "strip_size_kb": 64, 00:14:50.112 "state": "configuring", 00:14:50.112 "raid_level": "raid5f", 00:14:50.112 "superblock": false, 00:14:50.112 "num_base_bdevs": 3, 00:14:50.112 "num_base_bdevs_discovered": 0, 00:14:50.112 "num_base_bdevs_operational": 3, 00:14:50.112 "base_bdevs_list": [ 00:14:50.112 { 00:14:50.112 "name": "BaseBdev1", 00:14:50.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.112 "is_configured": false, 00:14:50.112 "data_offset": 0, 00:14:50.112 "data_size": 0 00:14:50.112 }, 00:14:50.112 { 00:14:50.112 "name": "BaseBdev2", 00:14:50.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.112 "is_configured": false, 00:14:50.112 "data_offset": 0, 00:14:50.112 "data_size": 0 00:14:50.112 }, 00:14:50.112 { 00:14:50.112 "name": "BaseBdev3", 00:14:50.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.112 "is_configured": false, 00:14:50.112 "data_offset": 0, 00:14:50.112 "data_size": 0 00:14:50.112 } 00:14:50.112 ] 00:14:50.112 }' 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.112 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.681 [2024-11-19 12:06:53.802527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:50.681 [2024-11-19 12:06:53.802565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.681 [2024-11-19 12:06:53.814513] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.681 [2024-11-19 12:06:53.814554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.681 [2024-11-19 12:06:53.814562] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.681 [2024-11-19 12:06:53.814588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.681 [2024-11-19 12:06:53.814594] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:50.681 [2024-11-19 12:06:53.814602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.681 [2024-11-19 12:06:53.861043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.681 BaseBdev1 00:14:50.681 12:06:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.681 [ 00:14:50.681 { 00:14:50.681 "name": "BaseBdev1", 00:14:50.681 "aliases": [ 00:14:50.681 "0a41e2d9-0b4c-4811-8021-2a571d336d1a" 00:14:50.681 ], 00:14:50.681 "product_name": "Malloc disk", 00:14:50.681 "block_size": 512, 00:14:50.681 "num_blocks": 65536, 00:14:50.681 "uuid": "0a41e2d9-0b4c-4811-8021-2a571d336d1a", 00:14:50.681 "assigned_rate_limits": { 00:14:50.681 "rw_ios_per_sec": 0, 00:14:50.681 
"rw_mbytes_per_sec": 0, 00:14:50.681 "r_mbytes_per_sec": 0, 00:14:50.681 "w_mbytes_per_sec": 0 00:14:50.681 }, 00:14:50.681 "claimed": true, 00:14:50.681 "claim_type": "exclusive_write", 00:14:50.681 "zoned": false, 00:14:50.681 "supported_io_types": { 00:14:50.681 "read": true, 00:14:50.681 "write": true, 00:14:50.681 "unmap": true, 00:14:50.681 "flush": true, 00:14:50.681 "reset": true, 00:14:50.681 "nvme_admin": false, 00:14:50.681 "nvme_io": false, 00:14:50.681 "nvme_io_md": false, 00:14:50.681 "write_zeroes": true, 00:14:50.681 "zcopy": true, 00:14:50.681 "get_zone_info": false, 00:14:50.681 "zone_management": false, 00:14:50.681 "zone_append": false, 00:14:50.681 "compare": false, 00:14:50.681 "compare_and_write": false, 00:14:50.681 "abort": true, 00:14:50.681 "seek_hole": false, 00:14:50.681 "seek_data": false, 00:14:50.681 "copy": true, 00:14:50.681 "nvme_iov_md": false 00:14:50.681 }, 00:14:50.681 "memory_domains": [ 00:14:50.681 { 00:14:50.681 "dma_device_id": "system", 00:14:50.681 "dma_device_type": 1 00:14:50.681 }, 00:14:50.681 { 00:14:50.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.681 "dma_device_type": 2 00:14:50.681 } 00:14:50.681 ], 00:14:50.681 "driver_specific": {} 00:14:50.681 } 00:14:50.681 ] 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.681 12:06:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.681 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.681 "name": "Existed_Raid", 00:14:50.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.681 "strip_size_kb": 64, 00:14:50.681 "state": "configuring", 00:14:50.681 "raid_level": "raid5f", 00:14:50.681 "superblock": false, 00:14:50.681 "num_base_bdevs": 3, 00:14:50.681 "num_base_bdevs_discovered": 1, 00:14:50.681 "num_base_bdevs_operational": 3, 00:14:50.681 "base_bdevs_list": [ 00:14:50.681 { 00:14:50.681 "name": "BaseBdev1", 00:14:50.681 "uuid": "0a41e2d9-0b4c-4811-8021-2a571d336d1a", 00:14:50.681 "is_configured": true, 00:14:50.681 "data_offset": 0, 00:14:50.681 "data_size": 65536 00:14:50.681 }, 00:14:50.681 { 00:14:50.681 "name": 
"BaseBdev2", 00:14:50.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.681 "is_configured": false, 00:14:50.681 "data_offset": 0, 00:14:50.681 "data_size": 0 00:14:50.681 }, 00:14:50.682 { 00:14:50.682 "name": "BaseBdev3", 00:14:50.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.682 "is_configured": false, 00:14:50.682 "data_offset": 0, 00:14:50.682 "data_size": 0 00:14:50.682 } 00:14:50.682 ] 00:14:50.682 }' 00:14:50.682 12:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.682 12:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.941 [2024-11-19 12:06:54.280365] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:50.941 [2024-11-19 12:06:54.280417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.941 [2024-11-19 12:06:54.292364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.941 [2024-11-19 12:06:54.294141] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:50.941 [2024-11-19 12:06:54.294180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.941 [2024-11-19 12:06:54.294189] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:50.941 [2024-11-19 12:06:54.294198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.941 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.200 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.200 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.200 "name": "Existed_Raid", 00:14:51.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.200 "strip_size_kb": 64, 00:14:51.200 "state": "configuring", 00:14:51.200 "raid_level": "raid5f", 00:14:51.200 "superblock": false, 00:14:51.200 "num_base_bdevs": 3, 00:14:51.200 "num_base_bdevs_discovered": 1, 00:14:51.200 "num_base_bdevs_operational": 3, 00:14:51.200 "base_bdevs_list": [ 00:14:51.200 { 00:14:51.200 "name": "BaseBdev1", 00:14:51.200 "uuid": "0a41e2d9-0b4c-4811-8021-2a571d336d1a", 00:14:51.200 "is_configured": true, 00:14:51.200 "data_offset": 0, 00:14:51.200 "data_size": 65536 00:14:51.200 }, 00:14:51.200 { 00:14:51.200 "name": "BaseBdev2", 00:14:51.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.200 "is_configured": false, 00:14:51.200 "data_offset": 0, 00:14:51.200 "data_size": 0 00:14:51.200 }, 00:14:51.200 { 00:14:51.200 "name": "BaseBdev3", 00:14:51.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.200 "is_configured": false, 00:14:51.200 "data_offset": 0, 00:14:51.200 "data_size": 0 00:14:51.200 } 00:14:51.200 ] 00:14:51.200 }' 00:14:51.200 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.200 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.460 [2024-11-19 12:06:54.716176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.460 BaseBdev2 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.460 [ 00:14:51.460 { 00:14:51.460 "name": "BaseBdev2", 00:14:51.460 "aliases": [ 00:14:51.460 "ce34df4f-affd-4aea-a63b-8680de3e3d76" 00:14:51.460 ], 00:14:51.460 "product_name": "Malloc disk", 00:14:51.460 "block_size": 512, 00:14:51.460 "num_blocks": 65536, 00:14:51.460 "uuid": "ce34df4f-affd-4aea-a63b-8680de3e3d76", 00:14:51.460 "assigned_rate_limits": { 00:14:51.460 "rw_ios_per_sec": 0, 00:14:51.460 "rw_mbytes_per_sec": 0, 00:14:51.460 "r_mbytes_per_sec": 0, 00:14:51.460 "w_mbytes_per_sec": 0 00:14:51.460 }, 00:14:51.460 "claimed": true, 00:14:51.460 "claim_type": "exclusive_write", 00:14:51.460 "zoned": false, 00:14:51.460 "supported_io_types": { 00:14:51.460 "read": true, 00:14:51.460 "write": true, 00:14:51.460 "unmap": true, 00:14:51.460 "flush": true, 00:14:51.460 "reset": true, 00:14:51.460 "nvme_admin": false, 00:14:51.460 "nvme_io": false, 00:14:51.460 "nvme_io_md": false, 00:14:51.460 "write_zeroes": true, 00:14:51.460 "zcopy": true, 00:14:51.460 "get_zone_info": false, 00:14:51.460 "zone_management": false, 00:14:51.460 "zone_append": false, 00:14:51.460 "compare": false, 00:14:51.460 "compare_and_write": false, 00:14:51.460 "abort": true, 00:14:51.460 "seek_hole": false, 00:14:51.460 "seek_data": false, 00:14:51.460 "copy": true, 00:14:51.460 "nvme_iov_md": false 00:14:51.460 }, 00:14:51.460 "memory_domains": [ 00:14:51.460 { 00:14:51.460 "dma_device_id": "system", 00:14:51.460 "dma_device_type": 1 00:14:51.460 }, 00:14:51.460 { 00:14:51.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.460 "dma_device_type": 2 00:14:51.460 } 00:14:51.460 ], 00:14:51.460 "driver_specific": {} 00:14:51.460 } 00:14:51.460 ] 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.460 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.461 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.461 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.461 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.461 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:51.461 "name": "Existed_Raid", 00:14:51.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.461 "strip_size_kb": 64, 00:14:51.461 "state": "configuring", 00:14:51.461 "raid_level": "raid5f", 00:14:51.461 "superblock": false, 00:14:51.461 "num_base_bdevs": 3, 00:14:51.461 "num_base_bdevs_discovered": 2, 00:14:51.461 "num_base_bdevs_operational": 3, 00:14:51.461 "base_bdevs_list": [ 00:14:51.461 { 00:14:51.461 "name": "BaseBdev1", 00:14:51.461 "uuid": "0a41e2d9-0b4c-4811-8021-2a571d336d1a", 00:14:51.461 "is_configured": true, 00:14:51.461 "data_offset": 0, 00:14:51.461 "data_size": 65536 00:14:51.461 }, 00:14:51.461 { 00:14:51.461 "name": "BaseBdev2", 00:14:51.461 "uuid": "ce34df4f-affd-4aea-a63b-8680de3e3d76", 00:14:51.461 "is_configured": true, 00:14:51.461 "data_offset": 0, 00:14:51.461 "data_size": 65536 00:14:51.461 }, 00:14:51.461 { 00:14:51.461 "name": "BaseBdev3", 00:14:51.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.461 "is_configured": false, 00:14:51.461 "data_offset": 0, 00:14:51.461 "data_size": 0 00:14:51.461 } 00:14:51.461 ] 00:14:51.461 }' 00:14:51.461 12:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.461 12:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.030 [2024-11-19 12:06:55.198890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.030 [2024-11-19 12:06:55.198967] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:52.030 [2024-11-19 12:06:55.198983] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:52.030 [2024-11-19 12:06:55.199281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:52.030 [2024-11-19 12:06:55.204507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:52.030 [2024-11-19 12:06:55.204530] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:52.030 [2024-11-19 12:06:55.204813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.030 BaseBdev3 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.030 [ 00:14:52.030 { 00:14:52.030 "name": "BaseBdev3", 00:14:52.030 "aliases": [ 00:14:52.030 "6012daf2-7e8c-468c-a888-3604d31b40c8" 00:14:52.030 ], 00:14:52.030 "product_name": "Malloc disk", 00:14:52.030 "block_size": 512, 00:14:52.030 "num_blocks": 65536, 00:14:52.030 "uuid": "6012daf2-7e8c-468c-a888-3604d31b40c8", 00:14:52.030 "assigned_rate_limits": { 00:14:52.030 "rw_ios_per_sec": 0, 00:14:52.030 "rw_mbytes_per_sec": 0, 00:14:52.030 "r_mbytes_per_sec": 0, 00:14:52.030 "w_mbytes_per_sec": 0 00:14:52.030 }, 00:14:52.030 "claimed": true, 00:14:52.030 "claim_type": "exclusive_write", 00:14:52.030 "zoned": false, 00:14:52.030 "supported_io_types": { 00:14:52.030 "read": true, 00:14:52.030 "write": true, 00:14:52.030 "unmap": true, 00:14:52.030 "flush": true, 00:14:52.030 "reset": true, 00:14:52.030 "nvme_admin": false, 00:14:52.030 "nvme_io": false, 00:14:52.030 "nvme_io_md": false, 00:14:52.030 "write_zeroes": true, 00:14:52.030 "zcopy": true, 00:14:52.030 "get_zone_info": false, 00:14:52.030 "zone_management": false, 00:14:52.030 "zone_append": false, 00:14:52.030 "compare": false, 00:14:52.030 "compare_and_write": false, 00:14:52.030 "abort": true, 00:14:52.030 "seek_hole": false, 00:14:52.030 "seek_data": false, 00:14:52.030 "copy": true, 00:14:52.030 "nvme_iov_md": false 00:14:52.030 }, 00:14:52.030 "memory_domains": [ 00:14:52.030 { 00:14:52.030 "dma_device_id": "system", 00:14:52.030 "dma_device_type": 1 00:14:52.030 }, 00:14:52.030 { 00:14:52.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.030 "dma_device_type": 2 00:14:52.030 } 00:14:52.030 ], 00:14:52.030 "driver_specific": {} 00:14:52.030 } 00:14:52.030 ] 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.030 12:06:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.030 "name": "Existed_Raid", 00:14:52.030 "uuid": "679ea699-9273-48b3-a024-86cfb9e3a37e", 00:14:52.030 "strip_size_kb": 64, 00:14:52.030 "state": "online", 00:14:52.030 "raid_level": "raid5f", 00:14:52.030 "superblock": false, 00:14:52.030 "num_base_bdevs": 3, 00:14:52.030 "num_base_bdevs_discovered": 3, 00:14:52.030 "num_base_bdevs_operational": 3, 00:14:52.030 "base_bdevs_list": [ 00:14:52.030 { 00:14:52.030 "name": "BaseBdev1", 00:14:52.030 "uuid": "0a41e2d9-0b4c-4811-8021-2a571d336d1a", 00:14:52.030 "is_configured": true, 00:14:52.030 "data_offset": 0, 00:14:52.030 "data_size": 65536 00:14:52.030 }, 00:14:52.030 { 00:14:52.030 "name": "BaseBdev2", 00:14:52.030 "uuid": "ce34df4f-affd-4aea-a63b-8680de3e3d76", 00:14:52.030 "is_configured": true, 00:14:52.030 "data_offset": 0, 00:14:52.030 "data_size": 65536 00:14:52.030 }, 00:14:52.030 { 00:14:52.030 "name": "BaseBdev3", 00:14:52.030 "uuid": "6012daf2-7e8c-468c-a888-3604d31b40c8", 00:14:52.030 "is_configured": true, 00:14:52.030 "data_offset": 0, 00:14:52.030 "data_size": 65536 00:14:52.030 } 00:14:52.030 ] 00:14:52.030 }' 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.030 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.599 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:52.599 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:52.599 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:52.599 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:52.599 12:06:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:52.599 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:52.599 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:52.599 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:52.599 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.600 [2024-11-19 12:06:55.682443] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:52.600 "name": "Existed_Raid", 00:14:52.600 "aliases": [ 00:14:52.600 "679ea699-9273-48b3-a024-86cfb9e3a37e" 00:14:52.600 ], 00:14:52.600 "product_name": "Raid Volume", 00:14:52.600 "block_size": 512, 00:14:52.600 "num_blocks": 131072, 00:14:52.600 "uuid": "679ea699-9273-48b3-a024-86cfb9e3a37e", 00:14:52.600 "assigned_rate_limits": { 00:14:52.600 "rw_ios_per_sec": 0, 00:14:52.600 "rw_mbytes_per_sec": 0, 00:14:52.600 "r_mbytes_per_sec": 0, 00:14:52.600 "w_mbytes_per_sec": 0 00:14:52.600 }, 00:14:52.600 "claimed": false, 00:14:52.600 "zoned": false, 00:14:52.600 "supported_io_types": { 00:14:52.600 "read": true, 00:14:52.600 "write": true, 00:14:52.600 "unmap": false, 00:14:52.600 "flush": false, 00:14:52.600 "reset": true, 00:14:52.600 "nvme_admin": false, 00:14:52.600 "nvme_io": false, 00:14:52.600 "nvme_io_md": false, 00:14:52.600 "write_zeroes": true, 00:14:52.600 "zcopy": false, 00:14:52.600 "get_zone_info": false, 00:14:52.600 "zone_management": false, 00:14:52.600 "zone_append": false, 
00:14:52.600 "compare": false, 00:14:52.600 "compare_and_write": false, 00:14:52.600 "abort": false, 00:14:52.600 "seek_hole": false, 00:14:52.600 "seek_data": false, 00:14:52.600 "copy": false, 00:14:52.600 "nvme_iov_md": false 00:14:52.600 }, 00:14:52.600 "driver_specific": { 00:14:52.600 "raid": { 00:14:52.600 "uuid": "679ea699-9273-48b3-a024-86cfb9e3a37e", 00:14:52.600 "strip_size_kb": 64, 00:14:52.600 "state": "online", 00:14:52.600 "raid_level": "raid5f", 00:14:52.600 "superblock": false, 00:14:52.600 "num_base_bdevs": 3, 00:14:52.600 "num_base_bdevs_discovered": 3, 00:14:52.600 "num_base_bdevs_operational": 3, 00:14:52.600 "base_bdevs_list": [ 00:14:52.600 { 00:14:52.600 "name": "BaseBdev1", 00:14:52.600 "uuid": "0a41e2d9-0b4c-4811-8021-2a571d336d1a", 00:14:52.600 "is_configured": true, 00:14:52.600 "data_offset": 0, 00:14:52.600 "data_size": 65536 00:14:52.600 }, 00:14:52.600 { 00:14:52.600 "name": "BaseBdev2", 00:14:52.600 "uuid": "ce34df4f-affd-4aea-a63b-8680de3e3d76", 00:14:52.600 "is_configured": true, 00:14:52.600 "data_offset": 0, 00:14:52.600 "data_size": 65536 00:14:52.600 }, 00:14:52.600 { 00:14:52.600 "name": "BaseBdev3", 00:14:52.600 "uuid": "6012daf2-7e8c-468c-a888-3604d31b40c8", 00:14:52.600 "is_configured": true, 00:14:52.600 "data_offset": 0, 00:14:52.600 "data_size": 65536 00:14:52.600 } 00:14:52.600 ] 00:14:52.600 } 00:14:52.600 } 00:14:52.600 }' 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:52.600 BaseBdev2 00:14:52.600 BaseBdev3' 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.600 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.600 [2024-11-19 12:06:55.890018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:52.860 
12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.860 12:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.860 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.860 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.860 "name": "Existed_Raid", 00:14:52.860 "uuid": "679ea699-9273-48b3-a024-86cfb9e3a37e", 00:14:52.860 "strip_size_kb": 64, 00:14:52.860 "state": 
"online", 00:14:52.860 "raid_level": "raid5f", 00:14:52.860 "superblock": false, 00:14:52.860 "num_base_bdevs": 3, 00:14:52.860 "num_base_bdevs_discovered": 2, 00:14:52.860 "num_base_bdevs_operational": 2, 00:14:52.860 "base_bdevs_list": [ 00:14:52.860 { 00:14:52.860 "name": null, 00:14:52.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.860 "is_configured": false, 00:14:52.860 "data_offset": 0, 00:14:52.860 "data_size": 65536 00:14:52.860 }, 00:14:52.860 { 00:14:52.860 "name": "BaseBdev2", 00:14:52.860 "uuid": "ce34df4f-affd-4aea-a63b-8680de3e3d76", 00:14:52.860 "is_configured": true, 00:14:52.860 "data_offset": 0, 00:14:52.860 "data_size": 65536 00:14:52.860 }, 00:14:52.860 { 00:14:52.860 "name": "BaseBdev3", 00:14:52.860 "uuid": "6012daf2-7e8c-468c-a888-3604d31b40c8", 00:14:52.860 "is_configured": true, 00:14:52.860 "data_offset": 0, 00:14:52.860 "data_size": 65536 00:14:52.860 } 00:14:52.860 ] 00:14:52.860 }' 00:14:52.860 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.860 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.120 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:53.120 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:53.120 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.120 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.120 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:53.120 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.120 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.120 12:06:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:53.120 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:53.120 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:53.120 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.120 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.120 [2024-11-19 12:06:56.479822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:53.120 [2024-11-19 12:06:56.479925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.379 [2024-11-19 12:06:56.570636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.379 [2024-11-19 12:06:56.626574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:53.379 [2024-11-19 12:06:56.626634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.379 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:53.380 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:53.380 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.380 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:53.380 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.380 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.380 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.639 BaseBdev2 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:53.639 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:53.640 [ 00:14:53.640 { 00:14:53.640 "name": "BaseBdev2", 00:14:53.640 "aliases": [ 00:14:53.640 "db1d9d4d-9744-4d7f-bdbf-264cce92ffe5" 00:14:53.640 ], 00:14:53.640 "product_name": "Malloc disk", 00:14:53.640 "block_size": 512, 00:14:53.640 "num_blocks": 65536, 00:14:53.640 "uuid": "db1d9d4d-9744-4d7f-bdbf-264cce92ffe5", 00:14:53.640 "assigned_rate_limits": { 00:14:53.640 "rw_ios_per_sec": 0, 00:14:53.640 "rw_mbytes_per_sec": 0, 00:14:53.640 "r_mbytes_per_sec": 0, 00:14:53.640 "w_mbytes_per_sec": 0 00:14:53.640 }, 00:14:53.640 "claimed": false, 00:14:53.640 "zoned": false, 00:14:53.640 "supported_io_types": { 00:14:53.640 "read": true, 00:14:53.640 "write": true, 00:14:53.640 "unmap": true, 00:14:53.640 "flush": true, 00:14:53.640 "reset": true, 00:14:53.640 "nvme_admin": false, 00:14:53.640 "nvme_io": false, 00:14:53.640 "nvme_io_md": false, 00:14:53.640 "write_zeroes": true, 00:14:53.640 "zcopy": true, 00:14:53.640 "get_zone_info": false, 00:14:53.640 "zone_management": false, 00:14:53.640 "zone_append": false, 00:14:53.640 "compare": false, 00:14:53.640 "compare_and_write": false, 00:14:53.640 "abort": true, 00:14:53.640 "seek_hole": false, 00:14:53.640 "seek_data": false, 00:14:53.640 "copy": true, 00:14:53.640 "nvme_iov_md": false 00:14:53.640 }, 00:14:53.640 "memory_domains": [ 00:14:53.640 { 00:14:53.640 "dma_device_id": "system", 00:14:53.640 "dma_device_type": 1 00:14:53.640 }, 00:14:53.640 { 00:14:53.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.640 "dma_device_type": 2 00:14:53.640 } 00:14:53.640 ], 00:14:53.640 "driver_specific": {} 00:14:53.640 } 00:14:53.640 ] 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.640 BaseBdev3 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:53.640 [ 00:14:53.640 { 00:14:53.640 "name": "BaseBdev3", 00:14:53.640 "aliases": [ 00:14:53.640 "77c12788-2b35-45fb-9251-0b381caf79da" 00:14:53.640 ], 00:14:53.640 "product_name": "Malloc disk", 00:14:53.640 "block_size": 512, 00:14:53.640 "num_blocks": 65536, 00:14:53.640 "uuid": "77c12788-2b35-45fb-9251-0b381caf79da", 00:14:53.640 "assigned_rate_limits": { 00:14:53.640 "rw_ios_per_sec": 0, 00:14:53.640 "rw_mbytes_per_sec": 0, 00:14:53.640 "r_mbytes_per_sec": 0, 00:14:53.640 "w_mbytes_per_sec": 0 00:14:53.640 }, 00:14:53.640 "claimed": false, 00:14:53.640 "zoned": false, 00:14:53.640 "supported_io_types": { 00:14:53.640 "read": true, 00:14:53.640 "write": true, 00:14:53.640 "unmap": true, 00:14:53.640 "flush": true, 00:14:53.640 "reset": true, 00:14:53.640 "nvme_admin": false, 00:14:53.640 "nvme_io": false, 00:14:53.640 "nvme_io_md": false, 00:14:53.640 "write_zeroes": true, 00:14:53.640 "zcopy": true, 00:14:53.640 "get_zone_info": false, 00:14:53.640 "zone_management": false, 00:14:53.640 "zone_append": false, 00:14:53.640 "compare": false, 00:14:53.640 "compare_and_write": false, 00:14:53.640 "abort": true, 00:14:53.640 "seek_hole": false, 00:14:53.640 "seek_data": false, 00:14:53.640 "copy": true, 00:14:53.640 "nvme_iov_md": false 00:14:53.640 }, 00:14:53.640 "memory_domains": [ 00:14:53.640 { 00:14:53.640 "dma_device_id": "system", 00:14:53.640 "dma_device_type": 1 00:14:53.640 }, 00:14:53.640 { 00:14:53.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.640 "dma_device_type": 2 00:14:53.640 } 00:14:53.640 ], 00:14:53.640 "driver_specific": {} 00:14:53.640 } 00:14:53.640 ] 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:53.640 12:06:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.640 [2024-11-19 12:06:56.926898] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:53.640 [2024-11-19 12:06:56.927024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:53.640 [2024-11-19 12:06:56.927072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:53.640 [2024-11-19 12:06:56.928795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.640 12:06:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.640 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.640 "name": "Existed_Raid", 00:14:53.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.640 "strip_size_kb": 64, 00:14:53.640 "state": "configuring", 00:14:53.640 "raid_level": "raid5f", 00:14:53.640 "superblock": false, 00:14:53.640 "num_base_bdevs": 3, 00:14:53.640 "num_base_bdevs_discovered": 2, 00:14:53.640 "num_base_bdevs_operational": 3, 00:14:53.640 "base_bdevs_list": [ 00:14:53.640 { 00:14:53.640 "name": "BaseBdev1", 00:14:53.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.640 "is_configured": false, 00:14:53.640 "data_offset": 0, 00:14:53.640 "data_size": 0 00:14:53.640 }, 00:14:53.640 { 00:14:53.640 "name": "BaseBdev2", 00:14:53.640 "uuid": "db1d9d4d-9744-4d7f-bdbf-264cce92ffe5", 00:14:53.640 "is_configured": true, 00:14:53.641 "data_offset": 0, 00:14:53.641 "data_size": 65536 00:14:53.641 }, 00:14:53.641 { 00:14:53.641 "name": "BaseBdev3", 00:14:53.641 "uuid": "77c12788-2b35-45fb-9251-0b381caf79da", 00:14:53.641 "is_configured": true, 
00:14:53.641 "data_offset": 0, 00:14:53.641 "data_size": 65536 00:14:53.641 } 00:14:53.641 ] 00:14:53.641 }' 00:14:53.641 12:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.641 12:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.209 [2024-11-19 12:06:57.350159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.209 12:06:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.209 "name": "Existed_Raid", 00:14:54.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.209 "strip_size_kb": 64, 00:14:54.209 "state": "configuring", 00:14:54.209 "raid_level": "raid5f", 00:14:54.209 "superblock": false, 00:14:54.209 "num_base_bdevs": 3, 00:14:54.209 "num_base_bdevs_discovered": 1, 00:14:54.209 "num_base_bdevs_operational": 3, 00:14:54.209 "base_bdevs_list": [ 00:14:54.209 { 00:14:54.209 "name": "BaseBdev1", 00:14:54.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.209 "is_configured": false, 00:14:54.209 "data_offset": 0, 00:14:54.209 "data_size": 0 00:14:54.209 }, 00:14:54.209 { 00:14:54.209 "name": null, 00:14:54.209 "uuid": "db1d9d4d-9744-4d7f-bdbf-264cce92ffe5", 00:14:54.209 "is_configured": false, 00:14:54.209 "data_offset": 0, 00:14:54.209 "data_size": 65536 00:14:54.209 }, 00:14:54.209 { 00:14:54.209 "name": "BaseBdev3", 00:14:54.209 "uuid": "77c12788-2b35-45fb-9251-0b381caf79da", 00:14:54.209 "is_configured": true, 00:14:54.209 "data_offset": 0, 00:14:54.209 "data_size": 65536 00:14:54.209 } 00:14:54.209 ] 00:14:54.209 }' 00:14:54.209 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.209 12:06:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.469 [2024-11-19 12:06:57.824600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.469 BaseBdev1 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.469 12:06:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.469 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.729 [ 00:14:54.729 { 00:14:54.729 "name": "BaseBdev1", 00:14:54.729 "aliases": [ 00:14:54.729 "192491fc-949c-4c5d-8e4c-d0667f5c0b16" 00:14:54.729 ], 00:14:54.729 "product_name": "Malloc disk", 00:14:54.729 "block_size": 512, 00:14:54.729 "num_blocks": 65536, 00:14:54.729 "uuid": "192491fc-949c-4c5d-8e4c-d0667f5c0b16", 00:14:54.729 "assigned_rate_limits": { 00:14:54.729 "rw_ios_per_sec": 0, 00:14:54.729 "rw_mbytes_per_sec": 0, 00:14:54.729 "r_mbytes_per_sec": 0, 00:14:54.729 "w_mbytes_per_sec": 0 00:14:54.729 }, 00:14:54.729 "claimed": true, 00:14:54.729 "claim_type": "exclusive_write", 00:14:54.729 "zoned": false, 00:14:54.729 "supported_io_types": { 00:14:54.729 "read": true, 00:14:54.729 "write": true, 00:14:54.729 "unmap": true, 00:14:54.729 "flush": true, 00:14:54.729 "reset": true, 00:14:54.729 "nvme_admin": false, 00:14:54.729 "nvme_io": false, 00:14:54.729 "nvme_io_md": false, 00:14:54.729 "write_zeroes": true, 00:14:54.729 "zcopy": true, 00:14:54.729 "get_zone_info": false, 00:14:54.729 "zone_management": false, 00:14:54.729 "zone_append": false, 00:14:54.729 
"compare": false, 00:14:54.729 "compare_and_write": false, 00:14:54.729 "abort": true, 00:14:54.729 "seek_hole": false, 00:14:54.729 "seek_data": false, 00:14:54.729 "copy": true, 00:14:54.729 "nvme_iov_md": false 00:14:54.729 }, 00:14:54.729 "memory_domains": [ 00:14:54.729 { 00:14:54.729 "dma_device_id": "system", 00:14:54.729 "dma_device_type": 1 00:14:54.729 }, 00:14:54.729 { 00:14:54.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.729 "dma_device_type": 2 00:14:54.729 } 00:14:54.729 ], 00:14:54.729 "driver_specific": {} 00:14:54.729 } 00:14:54.729 ] 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.729 12:06:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.729 "name": "Existed_Raid", 00:14:54.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.729 "strip_size_kb": 64, 00:14:54.729 "state": "configuring", 00:14:54.729 "raid_level": "raid5f", 00:14:54.729 "superblock": false, 00:14:54.729 "num_base_bdevs": 3, 00:14:54.729 "num_base_bdevs_discovered": 2, 00:14:54.729 "num_base_bdevs_operational": 3, 00:14:54.729 "base_bdevs_list": [ 00:14:54.729 { 00:14:54.729 "name": "BaseBdev1", 00:14:54.729 "uuid": "192491fc-949c-4c5d-8e4c-d0667f5c0b16", 00:14:54.729 "is_configured": true, 00:14:54.729 "data_offset": 0, 00:14:54.729 "data_size": 65536 00:14:54.729 }, 00:14:54.729 { 00:14:54.729 "name": null, 00:14:54.729 "uuid": "db1d9d4d-9744-4d7f-bdbf-264cce92ffe5", 00:14:54.729 "is_configured": false, 00:14:54.729 "data_offset": 0, 00:14:54.729 "data_size": 65536 00:14:54.729 }, 00:14:54.729 { 00:14:54.729 "name": "BaseBdev3", 00:14:54.729 "uuid": "77c12788-2b35-45fb-9251-0b381caf79da", 00:14:54.729 "is_configured": true, 00:14:54.729 "data_offset": 0, 00:14:54.729 "data_size": 65536 00:14:54.729 } 00:14:54.729 ] 00:14:54.729 }' 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.729 12:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.989 12:06:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.989 [2024-11-19 12:06:58.271852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.989 12:06:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.989 "name": "Existed_Raid", 00:14:54.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.989 "strip_size_kb": 64, 00:14:54.989 "state": "configuring", 00:14:54.989 "raid_level": "raid5f", 00:14:54.989 "superblock": false, 00:14:54.989 "num_base_bdevs": 3, 00:14:54.989 "num_base_bdevs_discovered": 1, 00:14:54.989 "num_base_bdevs_operational": 3, 00:14:54.989 "base_bdevs_list": [ 00:14:54.989 { 00:14:54.989 "name": "BaseBdev1", 00:14:54.989 "uuid": "192491fc-949c-4c5d-8e4c-d0667f5c0b16", 00:14:54.989 "is_configured": true, 00:14:54.989 "data_offset": 0, 00:14:54.989 "data_size": 65536 00:14:54.989 }, 00:14:54.989 { 00:14:54.989 "name": null, 00:14:54.989 "uuid": "db1d9d4d-9744-4d7f-bdbf-264cce92ffe5", 00:14:54.989 "is_configured": false, 00:14:54.989 "data_offset": 0, 00:14:54.989 "data_size": 65536 00:14:54.989 }, 00:14:54.989 { 00:14:54.989 "name": null, 
00:14:54.989 "uuid": "77c12788-2b35-45fb-9251-0b381caf79da", 00:14:54.989 "is_configured": false, 00:14:54.989 "data_offset": 0, 00:14:54.989 "data_size": 65536 00:14:54.989 } 00:14:54.989 ] 00:14:54.989 }' 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.989 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.559 [2024-11-19 12:06:58.727145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.559 12:06:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.559 "name": "Existed_Raid", 00:14:55.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.559 "strip_size_kb": 64, 00:14:55.559 "state": "configuring", 00:14:55.559 "raid_level": "raid5f", 00:14:55.559 "superblock": false, 00:14:55.559 "num_base_bdevs": 3, 00:14:55.559 "num_base_bdevs_discovered": 2, 00:14:55.559 "num_base_bdevs_operational": 3, 00:14:55.559 "base_bdevs_list": [ 00:14:55.559 { 
00:14:55.559 "name": "BaseBdev1", 00:14:55.559 "uuid": "192491fc-949c-4c5d-8e4c-d0667f5c0b16", 00:14:55.559 "is_configured": true, 00:14:55.559 "data_offset": 0, 00:14:55.559 "data_size": 65536 00:14:55.559 }, 00:14:55.559 { 00:14:55.559 "name": null, 00:14:55.559 "uuid": "db1d9d4d-9744-4d7f-bdbf-264cce92ffe5", 00:14:55.559 "is_configured": false, 00:14:55.559 "data_offset": 0, 00:14:55.559 "data_size": 65536 00:14:55.559 }, 00:14:55.559 { 00:14:55.559 "name": "BaseBdev3", 00:14:55.559 "uuid": "77c12788-2b35-45fb-9251-0b381caf79da", 00:14:55.559 "is_configured": true, 00:14:55.559 "data_offset": 0, 00:14:55.559 "data_size": 65536 00:14:55.559 } 00:14:55.559 ] 00:14:55.559 }' 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.559 12:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.818 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.818 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.818 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.818 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:55.818 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.076 [2024-11-19 12:06:59.210301] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.076 12:06:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.076 "name": "Existed_Raid", 00:14:56.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.077 "strip_size_kb": 64, 00:14:56.077 "state": "configuring", 00:14:56.077 "raid_level": "raid5f", 00:14:56.077 "superblock": false, 00:14:56.077 "num_base_bdevs": 3, 00:14:56.077 "num_base_bdevs_discovered": 1, 00:14:56.077 "num_base_bdevs_operational": 3, 00:14:56.077 "base_bdevs_list": [ 00:14:56.077 { 00:14:56.077 "name": null, 00:14:56.077 "uuid": "192491fc-949c-4c5d-8e4c-d0667f5c0b16", 00:14:56.077 "is_configured": false, 00:14:56.077 "data_offset": 0, 00:14:56.077 "data_size": 65536 00:14:56.077 }, 00:14:56.077 { 00:14:56.077 "name": null, 00:14:56.077 "uuid": "db1d9d4d-9744-4d7f-bdbf-264cce92ffe5", 00:14:56.077 "is_configured": false, 00:14:56.077 "data_offset": 0, 00:14:56.077 "data_size": 65536 00:14:56.077 }, 00:14:56.077 { 00:14:56.077 "name": "BaseBdev3", 00:14:56.077 "uuid": "77c12788-2b35-45fb-9251-0b381caf79da", 00:14:56.077 "is_configured": true, 00:14:56.077 "data_offset": 0, 00:14:56.077 "data_size": 65536 00:14:56.077 } 00:14:56.077 ] 00:14:56.077 }' 00:14:56.077 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.077 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.645 [2024-11-19 12:06:59.763072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.645 12:06:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.645 "name": "Existed_Raid", 00:14:56.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.645 "strip_size_kb": 64, 00:14:56.645 "state": "configuring", 00:14:56.645 "raid_level": "raid5f", 00:14:56.645 "superblock": false, 00:14:56.645 "num_base_bdevs": 3, 00:14:56.645 "num_base_bdevs_discovered": 2, 00:14:56.645 "num_base_bdevs_operational": 3, 00:14:56.645 "base_bdevs_list": [ 00:14:56.645 { 00:14:56.645 "name": null, 00:14:56.645 "uuid": "192491fc-949c-4c5d-8e4c-d0667f5c0b16", 00:14:56.645 "is_configured": false, 00:14:56.645 "data_offset": 0, 00:14:56.645 "data_size": 65536 00:14:56.645 }, 00:14:56.645 { 00:14:56.645 "name": "BaseBdev2", 00:14:56.645 "uuid": "db1d9d4d-9744-4d7f-bdbf-264cce92ffe5", 00:14:56.645 "is_configured": true, 00:14:56.645 "data_offset": 0, 00:14:56.645 "data_size": 65536 00:14:56.645 }, 00:14:56.645 { 00:14:56.645 "name": "BaseBdev3", 00:14:56.645 "uuid": "77c12788-2b35-45fb-9251-0b381caf79da", 00:14:56.645 "is_configured": true, 00:14:56.645 "data_offset": 0, 00:14:56.645 "data_size": 65536 00:14:56.645 } 00:14:56.645 ] 00:14:56.645 }' 00:14:56.645 12:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.646 12:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.933 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:56.933 
12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.933 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.933 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.933 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.933 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:56.933 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.933 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.933 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.933 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:56.933 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.933 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 192491fc-949c-4c5d-8e4c-d0667f5c0b16 00:14:56.933 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.933 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.193 [2024-11-19 12:07:00.296257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:57.193 [2024-11-19 12:07:00.296304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:57.193 [2024-11-19 12:07:00.296313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:57.193 [2024-11-19 12:07:00.296562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:14:57.193 [2024-11-19 12:07:00.301773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:57.193 [2024-11-19 12:07:00.301793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:57.193 [2024-11-19 12:07:00.302065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.193 NewBaseBdev 00:14:57.193 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.193 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:57.193 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:57.193 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.193 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.194 12:07:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.194 [ 00:14:57.194 { 00:14:57.194 "name": "NewBaseBdev", 00:14:57.194 "aliases": [ 00:14:57.194 "192491fc-949c-4c5d-8e4c-d0667f5c0b16" 00:14:57.194 ], 00:14:57.194 "product_name": "Malloc disk", 00:14:57.194 "block_size": 512, 00:14:57.194 "num_blocks": 65536, 00:14:57.194 "uuid": "192491fc-949c-4c5d-8e4c-d0667f5c0b16", 00:14:57.194 "assigned_rate_limits": { 00:14:57.194 "rw_ios_per_sec": 0, 00:14:57.194 "rw_mbytes_per_sec": 0, 00:14:57.194 "r_mbytes_per_sec": 0, 00:14:57.194 "w_mbytes_per_sec": 0 00:14:57.194 }, 00:14:57.194 "claimed": true, 00:14:57.194 "claim_type": "exclusive_write", 00:14:57.194 "zoned": false, 00:14:57.194 "supported_io_types": { 00:14:57.194 "read": true, 00:14:57.194 "write": true, 00:14:57.194 "unmap": true, 00:14:57.194 "flush": true, 00:14:57.194 "reset": true, 00:14:57.194 "nvme_admin": false, 00:14:57.194 "nvme_io": false, 00:14:57.194 "nvme_io_md": false, 00:14:57.194 "write_zeroes": true, 00:14:57.194 "zcopy": true, 00:14:57.194 "get_zone_info": false, 00:14:57.194 "zone_management": false, 00:14:57.194 "zone_append": false, 00:14:57.194 "compare": false, 00:14:57.194 "compare_and_write": false, 00:14:57.194 "abort": true, 00:14:57.194 "seek_hole": false, 00:14:57.194 "seek_data": false, 00:14:57.194 "copy": true, 00:14:57.194 "nvme_iov_md": false 00:14:57.194 }, 00:14:57.194 "memory_domains": [ 00:14:57.194 { 00:14:57.194 "dma_device_id": "system", 00:14:57.194 "dma_device_type": 1 00:14:57.194 }, 00:14:57.194 { 00:14:57.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.194 "dma_device_type": 2 00:14:57.194 } 00:14:57.194 ], 00:14:57.194 "driver_specific": {} 00:14:57.194 } 00:14:57.194 ] 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:57.194 12:07:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.194 "name": "Existed_Raid", 00:14:57.194 "uuid": "1fcb96e5-b644-4948-ae8b-7229be57c4b6", 00:14:57.194 "strip_size_kb": 64, 00:14:57.194 "state": "online", 
00:14:57.194 "raid_level": "raid5f", 00:14:57.194 "superblock": false, 00:14:57.194 "num_base_bdevs": 3, 00:14:57.194 "num_base_bdevs_discovered": 3, 00:14:57.194 "num_base_bdevs_operational": 3, 00:14:57.194 "base_bdevs_list": [ 00:14:57.194 { 00:14:57.194 "name": "NewBaseBdev", 00:14:57.194 "uuid": "192491fc-949c-4c5d-8e4c-d0667f5c0b16", 00:14:57.194 "is_configured": true, 00:14:57.194 "data_offset": 0, 00:14:57.194 "data_size": 65536 00:14:57.194 }, 00:14:57.194 { 00:14:57.194 "name": "BaseBdev2", 00:14:57.194 "uuid": "db1d9d4d-9744-4d7f-bdbf-264cce92ffe5", 00:14:57.194 "is_configured": true, 00:14:57.194 "data_offset": 0, 00:14:57.194 "data_size": 65536 00:14:57.194 }, 00:14:57.194 { 00:14:57.194 "name": "BaseBdev3", 00:14:57.194 "uuid": "77c12788-2b35-45fb-9251-0b381caf79da", 00:14:57.194 "is_configured": true, 00:14:57.194 "data_offset": 0, 00:14:57.194 "data_size": 65536 00:14:57.194 } 00:14:57.194 ] 00:14:57.194 }' 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.194 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.454 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:57.454 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:57.454 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.454 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.454 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.454 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.454 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:57.454 12:07:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.454 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.454 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.454 [2024-11-19 12:07:00.771700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.454 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.454 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.454 "name": "Existed_Raid", 00:14:57.454 "aliases": [ 00:14:57.454 "1fcb96e5-b644-4948-ae8b-7229be57c4b6" 00:14:57.454 ], 00:14:57.454 "product_name": "Raid Volume", 00:14:57.454 "block_size": 512, 00:14:57.454 "num_blocks": 131072, 00:14:57.454 "uuid": "1fcb96e5-b644-4948-ae8b-7229be57c4b6", 00:14:57.454 "assigned_rate_limits": { 00:14:57.454 "rw_ios_per_sec": 0, 00:14:57.454 "rw_mbytes_per_sec": 0, 00:14:57.454 "r_mbytes_per_sec": 0, 00:14:57.454 "w_mbytes_per_sec": 0 00:14:57.454 }, 00:14:57.454 "claimed": false, 00:14:57.454 "zoned": false, 00:14:57.454 "supported_io_types": { 00:14:57.454 "read": true, 00:14:57.454 "write": true, 00:14:57.454 "unmap": false, 00:14:57.454 "flush": false, 00:14:57.454 "reset": true, 00:14:57.454 "nvme_admin": false, 00:14:57.454 "nvme_io": false, 00:14:57.454 "nvme_io_md": false, 00:14:57.454 "write_zeroes": true, 00:14:57.454 "zcopy": false, 00:14:57.454 "get_zone_info": false, 00:14:57.454 "zone_management": false, 00:14:57.454 "zone_append": false, 00:14:57.454 "compare": false, 00:14:57.454 "compare_and_write": false, 00:14:57.454 "abort": false, 00:14:57.454 "seek_hole": false, 00:14:57.454 "seek_data": false, 00:14:57.454 "copy": false, 00:14:57.454 "nvme_iov_md": false 00:14:57.454 }, 00:14:57.454 "driver_specific": { 00:14:57.454 "raid": { 00:14:57.454 "uuid": 
"1fcb96e5-b644-4948-ae8b-7229be57c4b6", 00:14:57.454 "strip_size_kb": 64, 00:14:57.454 "state": "online", 00:14:57.454 "raid_level": "raid5f", 00:14:57.454 "superblock": false, 00:14:57.454 "num_base_bdevs": 3, 00:14:57.454 "num_base_bdevs_discovered": 3, 00:14:57.454 "num_base_bdevs_operational": 3, 00:14:57.454 "base_bdevs_list": [ 00:14:57.454 { 00:14:57.454 "name": "NewBaseBdev", 00:14:57.454 "uuid": "192491fc-949c-4c5d-8e4c-d0667f5c0b16", 00:14:57.454 "is_configured": true, 00:14:57.454 "data_offset": 0, 00:14:57.454 "data_size": 65536 00:14:57.454 }, 00:14:57.454 { 00:14:57.454 "name": "BaseBdev2", 00:14:57.454 "uuid": "db1d9d4d-9744-4d7f-bdbf-264cce92ffe5", 00:14:57.454 "is_configured": true, 00:14:57.454 "data_offset": 0, 00:14:57.454 "data_size": 65536 00:14:57.454 }, 00:14:57.454 { 00:14:57.454 "name": "BaseBdev3", 00:14:57.454 "uuid": "77c12788-2b35-45fb-9251-0b381caf79da", 00:14:57.454 "is_configured": true, 00:14:57.454 "data_offset": 0, 00:14:57.454 "data_size": 65536 00:14:57.454 } 00:14:57.454 ] 00:14:57.454 } 00:14:57.455 } 00:14:57.455 }' 00:14:57.455 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:57.715 BaseBdev2 00:14:57.715 BaseBdev3' 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.715 12:07:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.715 12:07:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.715 12:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.715 [2024-11-19 12:07:01.023123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:57.715 [2024-11-19 12:07:01.023156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.715 [2024-11-19 12:07:01.023228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.715 [2024-11-19 12:07:01.023513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.715 [2024-11-19 12:07:01.023531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79836 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79836 ']' 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79836 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79836 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79836' 00:14:57.715 killing process with pid 79836 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79836 00:14:57.715 [2024-11-19 12:07:01.069987] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.715 12:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79836 00:14:58.283 [2024-11-19 12:07:01.351101] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.222 ************************************ 00:14:59.222 END TEST raid5f_state_function_test 00:14:59.222 ************************************ 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:59.222 00:14:59.222 real 0m9.943s 00:14:59.222 user 0m15.764s 00:14:59.222 sys 0m1.735s 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.222 12:07:02 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:59.222 12:07:02 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:59.222 12:07:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.222 12:07:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:59.222 ************************************ 00:14:59.222 START TEST raid5f_state_function_test_sb 00:14:59.222 ************************************ 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:59.222 12:07:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:59.222 Process raid pid: 80452 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80452 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:59.222 12:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80452' 00:14:59.223 12:07:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80452 00:14:59.223 12:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80452 ']' 00:14:59.223 12:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.223 12:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.223 12:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.223 12:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.223 12:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.223 [2024-11-19 12:07:02.562951] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:14:59.223 [2024-11-19 12:07:02.563201] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.482 [2024-11-19 12:07:02.737619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.482 [2024-11-19 12:07:02.853337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.741 [2024-11-19 12:07:03.056179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.741 [2024-11-19 12:07:03.056260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.309 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.309 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:00.309 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:00.309 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.309 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.309 [2024-11-19 12:07:03.385208] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.310 [2024-11-19 12:07:03.385256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.310 [2024-11-19 12:07:03.385266] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.310 [2024-11-19 12:07:03.385276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.310 [2024-11-19 12:07:03.385282] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:00.310 [2024-11-19 12:07:03.385290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.310 12:07:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.310 "name": "Existed_Raid", 00:15:00.310 "uuid": "5e80e5a1-02b0-454c-8a43-72fe779664d1", 00:15:00.310 "strip_size_kb": 64, 00:15:00.310 "state": "configuring", 00:15:00.310 "raid_level": "raid5f", 00:15:00.310 "superblock": true, 00:15:00.310 "num_base_bdevs": 3, 00:15:00.310 "num_base_bdevs_discovered": 0, 00:15:00.310 "num_base_bdevs_operational": 3, 00:15:00.310 "base_bdevs_list": [ 00:15:00.310 { 00:15:00.310 "name": "BaseBdev1", 00:15:00.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.310 "is_configured": false, 00:15:00.310 "data_offset": 0, 00:15:00.310 "data_size": 0 00:15:00.310 }, 00:15:00.310 { 00:15:00.310 "name": "BaseBdev2", 00:15:00.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.310 "is_configured": false, 00:15:00.310 "data_offset": 0, 00:15:00.310 "data_size": 0 00:15:00.310 }, 00:15:00.310 { 00:15:00.310 "name": "BaseBdev3", 00:15:00.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.310 "is_configured": false, 00:15:00.310 "data_offset": 0, 00:15:00.310 "data_size": 0 00:15:00.310 } 00:15:00.310 ] 00:15:00.310 }' 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.310 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.570 [2024-11-19 12:07:03.808383] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.570 
[2024-11-19 12:07:03.808456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.570 [2024-11-19 12:07:03.820363] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.570 [2024-11-19 12:07:03.820440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.570 [2024-11-19 12:07:03.820466] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.570 [2024-11-19 12:07:03.820488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.570 [2024-11-19 12:07:03.820506] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:00.570 [2024-11-19 12:07:03.820526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.570 [2024-11-19 12:07:03.866437] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.570 BaseBdev1 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.570 [ 00:15:00.570 { 00:15:00.570 "name": "BaseBdev1", 00:15:00.570 "aliases": [ 00:15:00.570 "598bd6eb-d8b3-41bb-829a-0cfd121c2cc6" 00:15:00.570 ], 00:15:00.570 "product_name": "Malloc disk", 00:15:00.570 "block_size": 512, 00:15:00.570 
"num_blocks": 65536, 00:15:00.570 "uuid": "598bd6eb-d8b3-41bb-829a-0cfd121c2cc6", 00:15:00.570 "assigned_rate_limits": { 00:15:00.570 "rw_ios_per_sec": 0, 00:15:00.570 "rw_mbytes_per_sec": 0, 00:15:00.570 "r_mbytes_per_sec": 0, 00:15:00.570 "w_mbytes_per_sec": 0 00:15:00.570 }, 00:15:00.570 "claimed": true, 00:15:00.570 "claim_type": "exclusive_write", 00:15:00.570 "zoned": false, 00:15:00.570 "supported_io_types": { 00:15:00.570 "read": true, 00:15:00.570 "write": true, 00:15:00.570 "unmap": true, 00:15:00.570 "flush": true, 00:15:00.570 "reset": true, 00:15:00.570 "nvme_admin": false, 00:15:00.570 "nvme_io": false, 00:15:00.570 "nvme_io_md": false, 00:15:00.570 "write_zeroes": true, 00:15:00.570 "zcopy": true, 00:15:00.570 "get_zone_info": false, 00:15:00.570 "zone_management": false, 00:15:00.570 "zone_append": false, 00:15:00.570 "compare": false, 00:15:00.570 "compare_and_write": false, 00:15:00.570 "abort": true, 00:15:00.570 "seek_hole": false, 00:15:00.570 "seek_data": false, 00:15:00.570 "copy": true, 00:15:00.570 "nvme_iov_md": false 00:15:00.570 }, 00:15:00.570 "memory_domains": [ 00:15:00.570 { 00:15:00.570 "dma_device_id": "system", 00:15:00.570 "dma_device_type": 1 00:15:00.570 }, 00:15:00.570 { 00:15:00.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.570 "dma_device_type": 2 00:15:00.570 } 00:15:00.570 ], 00:15:00.570 "driver_specific": {} 00:15:00.570 } 00:15:00.570 ] 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.570 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.830 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.830 "name": "Existed_Raid", 00:15:00.830 "uuid": "a882fe6d-7c9c-4050-b9af-8eb81e8598ba", 00:15:00.830 "strip_size_kb": 64, 00:15:00.830 "state": "configuring", 00:15:00.830 "raid_level": "raid5f", 00:15:00.830 "superblock": true, 00:15:00.830 "num_base_bdevs": 3, 00:15:00.830 "num_base_bdevs_discovered": 1, 00:15:00.830 "num_base_bdevs_operational": 3, 00:15:00.830 "base_bdevs_list": [ 00:15:00.830 { 00:15:00.830 
"name": "BaseBdev1", 00:15:00.830 "uuid": "598bd6eb-d8b3-41bb-829a-0cfd121c2cc6", 00:15:00.830 "is_configured": true, 00:15:00.830 "data_offset": 2048, 00:15:00.830 "data_size": 63488 00:15:00.830 }, 00:15:00.830 { 00:15:00.830 "name": "BaseBdev2", 00:15:00.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.830 "is_configured": false, 00:15:00.830 "data_offset": 0, 00:15:00.830 "data_size": 0 00:15:00.830 }, 00:15:00.830 { 00:15:00.830 "name": "BaseBdev3", 00:15:00.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.830 "is_configured": false, 00:15:00.830 "data_offset": 0, 00:15:00.830 "data_size": 0 00:15:00.830 } 00:15:00.830 ] 00:15:00.830 }' 00:15:00.830 12:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.830 12:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.089 [2024-11-19 12:07:04.345651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.089 [2024-11-19 12:07:04.345702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:01.089 [2024-11-19 12:07:04.357679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.089 [2024-11-19 12:07:04.359511] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.089 [2024-11-19 12:07:04.359614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.089 [2024-11-19 12:07:04.359652] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:01.089 [2024-11-19 12:07:04.359679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.089 "name": "Existed_Raid", 00:15:01.089 "uuid": "7cb7b711-d49a-47a3-9598-7f5ef7b15d98", 00:15:01.089 "strip_size_kb": 64, 00:15:01.089 "state": "configuring", 00:15:01.089 "raid_level": "raid5f", 00:15:01.089 "superblock": true, 00:15:01.089 "num_base_bdevs": 3, 00:15:01.089 "num_base_bdevs_discovered": 1, 00:15:01.089 "num_base_bdevs_operational": 3, 00:15:01.089 "base_bdevs_list": [ 00:15:01.089 { 00:15:01.089 "name": "BaseBdev1", 00:15:01.089 "uuid": "598bd6eb-d8b3-41bb-829a-0cfd121c2cc6", 00:15:01.089 "is_configured": true, 00:15:01.089 "data_offset": 2048, 00:15:01.089 "data_size": 63488 00:15:01.089 }, 00:15:01.089 { 00:15:01.089 "name": "BaseBdev2", 00:15:01.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.089 "is_configured": false, 00:15:01.089 "data_offset": 0, 00:15:01.089 "data_size": 0 00:15:01.089 }, 00:15:01.089 { 00:15:01.089 "name": "BaseBdev3", 00:15:01.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.089 "is_configured": false, 00:15:01.089 "data_offset": 0, 00:15:01.089 "data_size": 
0 00:15:01.089 } 00:15:01.089 ] 00:15:01.089 }' 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.089 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.657 [2024-11-19 12:07:04.794238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.657 BaseBdev2 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.657 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.657 [ 00:15:01.657 { 00:15:01.657 "name": "BaseBdev2", 00:15:01.657 "aliases": [ 00:15:01.657 "0df90f44-8d1d-4d68-8e7b-3930fca65211" 00:15:01.657 ], 00:15:01.657 "product_name": "Malloc disk", 00:15:01.657 "block_size": 512, 00:15:01.658 "num_blocks": 65536, 00:15:01.658 "uuid": "0df90f44-8d1d-4d68-8e7b-3930fca65211", 00:15:01.658 "assigned_rate_limits": { 00:15:01.658 "rw_ios_per_sec": 0, 00:15:01.658 "rw_mbytes_per_sec": 0, 00:15:01.658 "r_mbytes_per_sec": 0, 00:15:01.658 "w_mbytes_per_sec": 0 00:15:01.658 }, 00:15:01.658 "claimed": true, 00:15:01.658 "claim_type": "exclusive_write", 00:15:01.658 "zoned": false, 00:15:01.658 "supported_io_types": { 00:15:01.658 "read": true, 00:15:01.658 "write": true, 00:15:01.658 "unmap": true, 00:15:01.658 "flush": true, 00:15:01.658 "reset": true, 00:15:01.658 "nvme_admin": false, 00:15:01.658 "nvme_io": false, 00:15:01.658 "nvme_io_md": false, 00:15:01.658 "write_zeroes": true, 00:15:01.658 "zcopy": true, 00:15:01.658 "get_zone_info": false, 00:15:01.658 "zone_management": false, 00:15:01.658 "zone_append": false, 00:15:01.658 "compare": false, 00:15:01.658 "compare_and_write": false, 00:15:01.658 "abort": true, 00:15:01.658 "seek_hole": false, 00:15:01.658 "seek_data": false, 00:15:01.658 "copy": true, 00:15:01.658 "nvme_iov_md": false 00:15:01.658 }, 00:15:01.658 "memory_domains": [ 00:15:01.658 { 00:15:01.658 "dma_device_id": "system", 00:15:01.658 "dma_device_type": 1 00:15:01.658 }, 00:15:01.658 { 00:15:01.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.658 "dma_device_type": 2 00:15:01.658 } 
00:15:01.658 ], 00:15:01.658 "driver_specific": {} 00:15:01.658 } 00:15:01.658 ] 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.658 "name": "Existed_Raid", 00:15:01.658 "uuid": "7cb7b711-d49a-47a3-9598-7f5ef7b15d98", 00:15:01.658 "strip_size_kb": 64, 00:15:01.658 "state": "configuring", 00:15:01.658 "raid_level": "raid5f", 00:15:01.658 "superblock": true, 00:15:01.658 "num_base_bdevs": 3, 00:15:01.658 "num_base_bdevs_discovered": 2, 00:15:01.658 "num_base_bdevs_operational": 3, 00:15:01.658 "base_bdevs_list": [ 00:15:01.658 { 00:15:01.658 "name": "BaseBdev1", 00:15:01.658 "uuid": "598bd6eb-d8b3-41bb-829a-0cfd121c2cc6", 00:15:01.658 "is_configured": true, 00:15:01.658 "data_offset": 2048, 00:15:01.658 "data_size": 63488 00:15:01.658 }, 00:15:01.658 { 00:15:01.658 "name": "BaseBdev2", 00:15:01.658 "uuid": "0df90f44-8d1d-4d68-8e7b-3930fca65211", 00:15:01.658 "is_configured": true, 00:15:01.658 "data_offset": 2048, 00:15:01.658 "data_size": 63488 00:15:01.658 }, 00:15:01.658 { 00:15:01.658 "name": "BaseBdev3", 00:15:01.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.658 "is_configured": false, 00:15:01.658 "data_offset": 0, 00:15:01.658 "data_size": 0 00:15:01.658 } 00:15:01.658 ] 00:15:01.658 }' 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.658 12:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.917 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:01.917 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:01.917 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.177 [2024-11-19 12:07:05.354699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.177 [2024-11-19 12:07:05.354951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:02.177 [2024-11-19 12:07:05.354975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:02.177 [2024-11-19 12:07:05.355292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:02.177 BaseBdev3 00:15:02.177 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.177 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:02.177 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:02.177 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:02.177 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:02.177 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:02.177 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:02.177 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:02.177 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.177 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.177 [2024-11-19 12:07:05.360840] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:02.177 [2024-11-19 12:07:05.360904] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:02.177 [2024-11-19 12:07:05.361134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.177 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.177 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:02.177 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.177 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.177 [ 00:15:02.177 { 00:15:02.177 "name": "BaseBdev3", 00:15:02.177 "aliases": [ 00:15:02.177 "9e0a8eca-a304-46fd-a443-8f20987b3440" 00:15:02.177 ], 00:15:02.177 "product_name": "Malloc disk", 00:15:02.177 "block_size": 512, 00:15:02.177 "num_blocks": 65536, 00:15:02.177 "uuid": "9e0a8eca-a304-46fd-a443-8f20987b3440", 00:15:02.177 "assigned_rate_limits": { 00:15:02.177 "rw_ios_per_sec": 0, 00:15:02.178 "rw_mbytes_per_sec": 0, 00:15:02.178 "r_mbytes_per_sec": 0, 00:15:02.178 "w_mbytes_per_sec": 0 00:15:02.178 }, 00:15:02.178 "claimed": true, 00:15:02.178 "claim_type": "exclusive_write", 00:15:02.178 "zoned": false, 00:15:02.178 "supported_io_types": { 00:15:02.178 "read": true, 00:15:02.178 "write": true, 00:15:02.178 "unmap": true, 00:15:02.178 "flush": true, 00:15:02.178 "reset": true, 00:15:02.178 "nvme_admin": false, 00:15:02.178 "nvme_io": false, 00:15:02.178 "nvme_io_md": false, 00:15:02.178 "write_zeroes": true, 00:15:02.178 "zcopy": true, 00:15:02.178 "get_zone_info": false, 00:15:02.178 "zone_management": false, 00:15:02.178 "zone_append": false, 00:15:02.178 "compare": false, 00:15:02.178 "compare_and_write": false, 00:15:02.178 "abort": true, 00:15:02.178 "seek_hole": false, 00:15:02.178 "seek_data": false, 00:15:02.178 "copy": true, 00:15:02.178 
"nvme_iov_md": false 00:15:02.178 }, 00:15:02.178 "memory_domains": [ 00:15:02.178 { 00:15:02.178 "dma_device_id": "system", 00:15:02.178 "dma_device_type": 1 00:15:02.178 }, 00:15:02.178 { 00:15:02.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.178 "dma_device_type": 2 00:15:02.178 } 00:15:02.178 ], 00:15:02.178 "driver_specific": {} 00:15:02.178 } 00:15:02.178 ] 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.178 "name": "Existed_Raid", 00:15:02.178 "uuid": "7cb7b711-d49a-47a3-9598-7f5ef7b15d98", 00:15:02.178 "strip_size_kb": 64, 00:15:02.178 "state": "online", 00:15:02.178 "raid_level": "raid5f", 00:15:02.178 "superblock": true, 00:15:02.178 "num_base_bdevs": 3, 00:15:02.178 "num_base_bdevs_discovered": 3, 00:15:02.178 "num_base_bdevs_operational": 3, 00:15:02.178 "base_bdevs_list": [ 00:15:02.178 { 00:15:02.178 "name": "BaseBdev1", 00:15:02.178 "uuid": "598bd6eb-d8b3-41bb-829a-0cfd121c2cc6", 00:15:02.178 "is_configured": true, 00:15:02.178 "data_offset": 2048, 00:15:02.178 "data_size": 63488 00:15:02.178 }, 00:15:02.178 { 00:15:02.178 "name": "BaseBdev2", 00:15:02.178 "uuid": "0df90f44-8d1d-4d68-8e7b-3930fca65211", 00:15:02.178 "is_configured": true, 00:15:02.178 "data_offset": 2048, 00:15:02.178 "data_size": 63488 00:15:02.178 }, 00:15:02.178 { 00:15:02.178 "name": "BaseBdev3", 00:15:02.178 "uuid": "9e0a8eca-a304-46fd-a443-8f20987b3440", 00:15:02.178 "is_configured": true, 00:15:02.178 "data_offset": 2048, 00:15:02.178 "data_size": 63488 00:15:02.178 } 00:15:02.178 ] 00:15:02.178 }' 00:15:02.178 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.178 12:07:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.747 [2024-11-19 12:07:05.846457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:02.747 "name": "Existed_Raid", 00:15:02.747 "aliases": [ 00:15:02.747 "7cb7b711-d49a-47a3-9598-7f5ef7b15d98" 00:15:02.747 ], 00:15:02.747 "product_name": "Raid Volume", 00:15:02.747 "block_size": 512, 00:15:02.747 "num_blocks": 126976, 00:15:02.747 "uuid": "7cb7b711-d49a-47a3-9598-7f5ef7b15d98", 00:15:02.747 "assigned_rate_limits": { 00:15:02.747 "rw_ios_per_sec": 0, 00:15:02.747 
"rw_mbytes_per_sec": 0, 00:15:02.747 "r_mbytes_per_sec": 0, 00:15:02.747 "w_mbytes_per_sec": 0 00:15:02.747 }, 00:15:02.747 "claimed": false, 00:15:02.747 "zoned": false, 00:15:02.747 "supported_io_types": { 00:15:02.747 "read": true, 00:15:02.747 "write": true, 00:15:02.747 "unmap": false, 00:15:02.747 "flush": false, 00:15:02.747 "reset": true, 00:15:02.747 "nvme_admin": false, 00:15:02.747 "nvme_io": false, 00:15:02.747 "nvme_io_md": false, 00:15:02.747 "write_zeroes": true, 00:15:02.747 "zcopy": false, 00:15:02.747 "get_zone_info": false, 00:15:02.747 "zone_management": false, 00:15:02.747 "zone_append": false, 00:15:02.747 "compare": false, 00:15:02.747 "compare_and_write": false, 00:15:02.747 "abort": false, 00:15:02.747 "seek_hole": false, 00:15:02.747 "seek_data": false, 00:15:02.747 "copy": false, 00:15:02.747 "nvme_iov_md": false 00:15:02.747 }, 00:15:02.747 "driver_specific": { 00:15:02.747 "raid": { 00:15:02.747 "uuid": "7cb7b711-d49a-47a3-9598-7f5ef7b15d98", 00:15:02.747 "strip_size_kb": 64, 00:15:02.747 "state": "online", 00:15:02.747 "raid_level": "raid5f", 00:15:02.747 "superblock": true, 00:15:02.747 "num_base_bdevs": 3, 00:15:02.747 "num_base_bdevs_discovered": 3, 00:15:02.747 "num_base_bdevs_operational": 3, 00:15:02.747 "base_bdevs_list": [ 00:15:02.747 { 00:15:02.747 "name": "BaseBdev1", 00:15:02.747 "uuid": "598bd6eb-d8b3-41bb-829a-0cfd121c2cc6", 00:15:02.747 "is_configured": true, 00:15:02.747 "data_offset": 2048, 00:15:02.747 "data_size": 63488 00:15:02.747 }, 00:15:02.747 { 00:15:02.747 "name": "BaseBdev2", 00:15:02.747 "uuid": "0df90f44-8d1d-4d68-8e7b-3930fca65211", 00:15:02.747 "is_configured": true, 00:15:02.747 "data_offset": 2048, 00:15:02.747 "data_size": 63488 00:15:02.747 }, 00:15:02.747 { 00:15:02.747 "name": "BaseBdev3", 00:15:02.747 "uuid": "9e0a8eca-a304-46fd-a443-8f20987b3440", 00:15:02.747 "is_configured": true, 00:15:02.747 "data_offset": 2048, 00:15:02.747 "data_size": 63488 00:15:02.747 } 00:15:02.747 ] 00:15:02.747 } 
00:15:02.747 } 00:15:02.747 }' 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:02.747 BaseBdev2 00:15:02.747 BaseBdev3' 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.747 12:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.747 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.747 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.747 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.747 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:02.747 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.747 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.747 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.747 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.747 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.747 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.747 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:02.747 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.747 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 [2024-11-19 12:07:06.065934] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.006 "name": "Existed_Raid", 00:15:03.006 "uuid": "7cb7b711-d49a-47a3-9598-7f5ef7b15d98", 00:15:03.006 "strip_size_kb": 64, 00:15:03.006 "state": "online", 00:15:03.006 "raid_level": "raid5f", 00:15:03.006 "superblock": true, 00:15:03.006 "num_base_bdevs": 3, 00:15:03.006 "num_base_bdevs_discovered": 2, 00:15:03.006 "num_base_bdevs_operational": 2, 00:15:03.006 "base_bdevs_list": [ 00:15:03.006 { 00:15:03.006 "name": null, 00:15:03.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.006 "is_configured": false, 00:15:03.006 "data_offset": 0, 00:15:03.006 "data_size": 63488 00:15:03.006 }, 00:15:03.006 { 00:15:03.006 "name": "BaseBdev2", 00:15:03.006 "uuid": "0df90f44-8d1d-4d68-8e7b-3930fca65211", 00:15:03.006 "is_configured": true, 00:15:03.006 "data_offset": 2048, 00:15:03.006 "data_size": 63488 00:15:03.006 }, 00:15:03.006 { 00:15:03.006 "name": "BaseBdev3", 00:15:03.006 "uuid": "9e0a8eca-a304-46fd-a443-8f20987b3440", 00:15:03.006 "is_configured": true, 00:15:03.006 "data_offset": 2048, 00:15:03.006 "data_size": 63488 00:15:03.006 } 00:15:03.006 ] 00:15:03.006 }' 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.006 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.265 12:07:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:03.265 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:03.265 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.265 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.265 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.265 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:03.265 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.265 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:03.265 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:03.265 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:03.265 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.265 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.265 [2024-11-19 12:07:06.571461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:03.265 [2024-11-19 12:07:06.571611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.524 [2024-11-19 12:07:06.661443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.524 [2024-11-19 12:07:06.709400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:03.524 [2024-11-19 12:07:06.709445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.524 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.524 BaseBdev2 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.784 [ 00:15:03.784 { 00:15:03.784 "name": "BaseBdev2", 00:15:03.784 "aliases": [ 00:15:03.784 "c05b9567-831c-4c1b-8a47-a72b936ece03" 00:15:03.784 ], 00:15:03.784 "product_name": "Malloc disk", 00:15:03.784 "block_size": 512, 00:15:03.784 "num_blocks": 65536, 00:15:03.784 "uuid": "c05b9567-831c-4c1b-8a47-a72b936ece03", 00:15:03.784 "assigned_rate_limits": { 00:15:03.784 "rw_ios_per_sec": 0, 00:15:03.784 "rw_mbytes_per_sec": 0, 00:15:03.784 "r_mbytes_per_sec": 0, 00:15:03.784 "w_mbytes_per_sec": 0 00:15:03.784 }, 00:15:03.784 "claimed": false, 00:15:03.784 "zoned": false, 00:15:03.784 "supported_io_types": { 00:15:03.784 "read": true, 00:15:03.784 "write": true, 00:15:03.784 "unmap": true, 00:15:03.784 "flush": true, 00:15:03.784 "reset": true, 00:15:03.784 "nvme_admin": false, 00:15:03.784 "nvme_io": false, 00:15:03.784 "nvme_io_md": false, 00:15:03.784 "write_zeroes": true, 00:15:03.784 "zcopy": true, 00:15:03.784 "get_zone_info": false, 00:15:03.784 "zone_management": false, 00:15:03.784 "zone_append": false, 
00:15:03.784 "compare": false, 00:15:03.784 "compare_and_write": false, 00:15:03.784 "abort": true, 00:15:03.784 "seek_hole": false, 00:15:03.784 "seek_data": false, 00:15:03.784 "copy": true, 00:15:03.784 "nvme_iov_md": false 00:15:03.784 }, 00:15:03.784 "memory_domains": [ 00:15:03.784 { 00:15:03.784 "dma_device_id": "system", 00:15:03.784 "dma_device_type": 1 00:15:03.784 }, 00:15:03.784 { 00:15:03.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.784 "dma_device_type": 2 00:15:03.784 } 00:15:03.784 ], 00:15:03.784 "driver_specific": {} 00:15:03.784 } 00:15:03.784 ] 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:03.784 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.785 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.785 BaseBdev3 00:15:03.785 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.785 12:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:03.785 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:03.785 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:03.785 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:03.785 
12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:03.785 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:03.785 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:03.785 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.785 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.785 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.785 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:03.785 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.785 12:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.785 [ 00:15:03.785 { 00:15:03.785 "name": "BaseBdev3", 00:15:03.785 "aliases": [ 00:15:03.785 "4945a542-7052-4570-a17d-1a06e55749d9" 00:15:03.785 ], 00:15:03.785 "product_name": "Malloc disk", 00:15:03.785 "block_size": 512, 00:15:03.785 "num_blocks": 65536, 00:15:03.785 "uuid": "4945a542-7052-4570-a17d-1a06e55749d9", 00:15:03.785 "assigned_rate_limits": { 00:15:03.785 "rw_ios_per_sec": 0, 00:15:03.785 "rw_mbytes_per_sec": 0, 00:15:03.785 "r_mbytes_per_sec": 0, 00:15:03.785 "w_mbytes_per_sec": 0 00:15:03.785 }, 00:15:03.785 "claimed": false, 00:15:03.785 "zoned": false, 00:15:03.785 "supported_io_types": { 00:15:03.785 "read": true, 00:15:03.785 "write": true, 00:15:03.785 "unmap": true, 00:15:03.785 "flush": true, 00:15:03.785 "reset": true, 00:15:03.785 "nvme_admin": false, 00:15:03.785 "nvme_io": false, 00:15:03.785 "nvme_io_md": false, 00:15:03.785 "write_zeroes": true, 00:15:03.785 "zcopy": true, 00:15:03.785 "get_zone_info": 
false, 00:15:03.785 "zone_management": false, 00:15:03.785 "zone_append": false, 00:15:03.785 "compare": false, 00:15:03.785 "compare_and_write": false, 00:15:03.785 "abort": true, 00:15:03.785 "seek_hole": false, 00:15:03.785 "seek_data": false, 00:15:03.785 "copy": true, 00:15:03.785 "nvme_iov_md": false 00:15:03.785 }, 00:15:03.785 "memory_domains": [ 00:15:03.785 { 00:15:03.785 "dma_device_id": "system", 00:15:03.785 "dma_device_type": 1 00:15:03.785 }, 00:15:03.785 { 00:15:03.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.785 "dma_device_type": 2 00:15:03.785 } 00:15:03.785 ], 00:15:03.785 "driver_specific": {} 00:15:03.785 } 00:15:03.785 ] 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.785 [2024-11-19 12:07:07.017020] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:03.785 [2024-11-19 12:07:07.017106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:03.785 [2024-11-19 12:07:07.017164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:03.785 [2024-11-19 12:07:07.018885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.785 12:07:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.785 "name": "Existed_Raid", 00:15:03.785 "uuid": "a8c42318-868b-466f-bace-eb275458bf40", 00:15:03.785 "strip_size_kb": 64, 00:15:03.785 "state": "configuring", 00:15:03.785 "raid_level": "raid5f", 00:15:03.785 "superblock": true, 00:15:03.785 "num_base_bdevs": 3, 00:15:03.785 "num_base_bdevs_discovered": 2, 00:15:03.785 "num_base_bdevs_operational": 3, 00:15:03.785 "base_bdevs_list": [ 00:15:03.785 { 00:15:03.785 "name": "BaseBdev1", 00:15:03.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.785 "is_configured": false, 00:15:03.785 "data_offset": 0, 00:15:03.785 "data_size": 0 00:15:03.785 }, 00:15:03.785 { 00:15:03.785 "name": "BaseBdev2", 00:15:03.785 "uuid": "c05b9567-831c-4c1b-8a47-a72b936ece03", 00:15:03.785 "is_configured": true, 00:15:03.785 "data_offset": 2048, 00:15:03.785 "data_size": 63488 00:15:03.785 }, 00:15:03.785 { 00:15:03.785 "name": "BaseBdev3", 00:15:03.785 "uuid": "4945a542-7052-4570-a17d-1a06e55749d9", 00:15:03.785 "is_configured": true, 00:15:03.785 "data_offset": 2048, 00:15:03.785 "data_size": 63488 00:15:03.785 } 00:15:03.785 ] 00:15:03.785 }' 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.785 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.353 [2024-11-19 12:07:07.476233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.353 
12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.353 "name": "Existed_Raid", 00:15:04.353 "uuid": 
"a8c42318-868b-466f-bace-eb275458bf40", 00:15:04.353 "strip_size_kb": 64, 00:15:04.353 "state": "configuring", 00:15:04.353 "raid_level": "raid5f", 00:15:04.353 "superblock": true, 00:15:04.353 "num_base_bdevs": 3, 00:15:04.353 "num_base_bdevs_discovered": 1, 00:15:04.353 "num_base_bdevs_operational": 3, 00:15:04.353 "base_bdevs_list": [ 00:15:04.353 { 00:15:04.353 "name": "BaseBdev1", 00:15:04.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.353 "is_configured": false, 00:15:04.353 "data_offset": 0, 00:15:04.353 "data_size": 0 00:15:04.353 }, 00:15:04.353 { 00:15:04.353 "name": null, 00:15:04.353 "uuid": "c05b9567-831c-4c1b-8a47-a72b936ece03", 00:15:04.353 "is_configured": false, 00:15:04.353 "data_offset": 0, 00:15:04.353 "data_size": 63488 00:15:04.353 }, 00:15:04.353 { 00:15:04.353 "name": "BaseBdev3", 00:15:04.353 "uuid": "4945a542-7052-4570-a17d-1a06e55749d9", 00:15:04.353 "is_configured": true, 00:15:04.353 "data_offset": 2048, 00:15:04.353 "data_size": 63488 00:15:04.353 } 00:15:04.353 ] 00:15:04.353 }' 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.353 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:04.614 12:07:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.614 [2024-11-19 12:07:07.950814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.614 BaseBdev1 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.614 [ 00:15:04.614 { 00:15:04.614 "name": "BaseBdev1", 00:15:04.614 "aliases": [ 00:15:04.614 "46c0985f-b354-4b5e-bf7a-2987ebb7e2fa" 00:15:04.614 ], 00:15:04.614 "product_name": "Malloc disk", 00:15:04.614 "block_size": 512, 00:15:04.614 "num_blocks": 65536, 00:15:04.614 "uuid": "46c0985f-b354-4b5e-bf7a-2987ebb7e2fa", 00:15:04.614 "assigned_rate_limits": { 00:15:04.614 "rw_ios_per_sec": 0, 00:15:04.614 "rw_mbytes_per_sec": 0, 00:15:04.614 "r_mbytes_per_sec": 0, 00:15:04.614 "w_mbytes_per_sec": 0 00:15:04.614 }, 00:15:04.614 "claimed": true, 00:15:04.614 "claim_type": "exclusive_write", 00:15:04.614 "zoned": false, 00:15:04.614 "supported_io_types": { 00:15:04.614 "read": true, 00:15:04.614 "write": true, 00:15:04.614 "unmap": true, 00:15:04.614 "flush": true, 00:15:04.614 "reset": true, 00:15:04.614 "nvme_admin": false, 00:15:04.614 "nvme_io": false, 00:15:04.614 "nvme_io_md": false, 00:15:04.614 "write_zeroes": true, 00:15:04.614 "zcopy": true, 00:15:04.614 "get_zone_info": false, 00:15:04.614 "zone_management": false, 00:15:04.614 "zone_append": false, 00:15:04.614 "compare": false, 00:15:04.614 "compare_and_write": false, 00:15:04.614 "abort": true, 00:15:04.614 "seek_hole": false, 00:15:04.614 "seek_data": false, 00:15:04.614 "copy": true, 00:15:04.614 "nvme_iov_md": false 00:15:04.614 }, 00:15:04.614 "memory_domains": [ 00:15:04.614 { 00:15:04.614 "dma_device_id": "system", 00:15:04.614 "dma_device_type": 1 00:15:04.614 }, 00:15:04.614 { 00:15:04.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.614 "dma_device_type": 2 00:15:04.614 } 00:15:04.614 ], 00:15:04.614 "driver_specific": {} 00:15:04.614 } 00:15:04.614 ] 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.614 12:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.873 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.873 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.873 "name": "Existed_Raid", 00:15:04.873 "uuid": 
"a8c42318-868b-466f-bace-eb275458bf40", 00:15:04.873 "strip_size_kb": 64, 00:15:04.873 "state": "configuring", 00:15:04.873 "raid_level": "raid5f", 00:15:04.873 "superblock": true, 00:15:04.873 "num_base_bdevs": 3, 00:15:04.873 "num_base_bdevs_discovered": 2, 00:15:04.873 "num_base_bdevs_operational": 3, 00:15:04.873 "base_bdevs_list": [ 00:15:04.873 { 00:15:04.873 "name": "BaseBdev1", 00:15:04.873 "uuid": "46c0985f-b354-4b5e-bf7a-2987ebb7e2fa", 00:15:04.873 "is_configured": true, 00:15:04.873 "data_offset": 2048, 00:15:04.873 "data_size": 63488 00:15:04.873 }, 00:15:04.873 { 00:15:04.873 "name": null, 00:15:04.873 "uuid": "c05b9567-831c-4c1b-8a47-a72b936ece03", 00:15:04.873 "is_configured": false, 00:15:04.873 "data_offset": 0, 00:15:04.873 "data_size": 63488 00:15:04.873 }, 00:15:04.873 { 00:15:04.873 "name": "BaseBdev3", 00:15:04.873 "uuid": "4945a542-7052-4570-a17d-1a06e55749d9", 00:15:04.873 "is_configured": true, 00:15:04.873 "data_offset": 2048, 00:15:04.873 "data_size": 63488 00:15:04.873 } 00:15:04.873 ] 00:15:04.873 }' 00:15:04.873 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.873 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:05.133 12:07:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.133 [2024-11-19 12:07:08.446047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.133 "name": "Existed_Raid", 00:15:05.133 "uuid": "a8c42318-868b-466f-bace-eb275458bf40", 00:15:05.133 "strip_size_kb": 64, 00:15:05.133 "state": "configuring", 00:15:05.133 "raid_level": "raid5f", 00:15:05.133 "superblock": true, 00:15:05.133 "num_base_bdevs": 3, 00:15:05.133 "num_base_bdevs_discovered": 1, 00:15:05.133 "num_base_bdevs_operational": 3, 00:15:05.133 "base_bdevs_list": [ 00:15:05.133 { 00:15:05.133 "name": "BaseBdev1", 00:15:05.133 "uuid": "46c0985f-b354-4b5e-bf7a-2987ebb7e2fa", 00:15:05.133 "is_configured": true, 00:15:05.133 "data_offset": 2048, 00:15:05.133 "data_size": 63488 00:15:05.133 }, 00:15:05.133 { 00:15:05.133 "name": null, 00:15:05.133 "uuid": "c05b9567-831c-4c1b-8a47-a72b936ece03", 00:15:05.133 "is_configured": false, 00:15:05.133 "data_offset": 0, 00:15:05.133 "data_size": 63488 00:15:05.133 }, 00:15:05.133 { 00:15:05.133 "name": null, 00:15:05.133 "uuid": "4945a542-7052-4570-a17d-1a06e55749d9", 00:15:05.133 "is_configured": false, 00:15:05.133 "data_offset": 0, 00:15:05.133 "data_size": 63488 00:15:05.133 } 00:15:05.133 ] 00:15:05.133 }' 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.133 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.701 [2024-11-19 12:07:08.941205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.701 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.702 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.702 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.702 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.702 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.702 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.702 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.702 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.702 "name": "Existed_Raid", 00:15:05.702 "uuid": "a8c42318-868b-466f-bace-eb275458bf40", 00:15:05.702 "strip_size_kb": 64, 00:15:05.702 "state": "configuring", 00:15:05.702 "raid_level": "raid5f", 00:15:05.702 "superblock": true, 00:15:05.702 "num_base_bdevs": 3, 00:15:05.702 "num_base_bdevs_discovered": 2, 00:15:05.702 "num_base_bdevs_operational": 3, 00:15:05.702 "base_bdevs_list": [ 00:15:05.702 { 00:15:05.702 "name": "BaseBdev1", 00:15:05.702 "uuid": "46c0985f-b354-4b5e-bf7a-2987ebb7e2fa", 00:15:05.702 "is_configured": true, 00:15:05.702 "data_offset": 2048, 00:15:05.702 "data_size": 63488 00:15:05.702 }, 00:15:05.702 { 00:15:05.702 "name": null, 00:15:05.702 "uuid": "c05b9567-831c-4c1b-8a47-a72b936ece03", 00:15:05.702 "is_configured": false, 00:15:05.702 "data_offset": 0, 00:15:05.702 "data_size": 63488 00:15:05.702 }, 00:15:05.702 { 00:15:05.702 "name": "BaseBdev3", 00:15:05.702 "uuid": "4945a542-7052-4570-a17d-1a06e55749d9", 
00:15:05.702 "is_configured": true, 00:15:05.702 "data_offset": 2048, 00:15:05.702 "data_size": 63488 00:15:05.702 } 00:15:05.702 ] 00:15:05.702 }' 00:15:05.702 12:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.702 12:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.269 [2024-11-19 12:07:09.392445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.269 "name": "Existed_Raid", 00:15:06.269 "uuid": "a8c42318-868b-466f-bace-eb275458bf40", 00:15:06.269 "strip_size_kb": 64, 00:15:06.269 "state": "configuring", 00:15:06.269 "raid_level": "raid5f", 00:15:06.269 "superblock": true, 00:15:06.269 "num_base_bdevs": 3, 00:15:06.269 "num_base_bdevs_discovered": 1, 00:15:06.269 "num_base_bdevs_operational": 3, 00:15:06.269 "base_bdevs_list": [ 00:15:06.269 { 00:15:06.269 
"name": null, 00:15:06.269 "uuid": "46c0985f-b354-4b5e-bf7a-2987ebb7e2fa", 00:15:06.269 "is_configured": false, 00:15:06.269 "data_offset": 0, 00:15:06.269 "data_size": 63488 00:15:06.269 }, 00:15:06.269 { 00:15:06.269 "name": null, 00:15:06.269 "uuid": "c05b9567-831c-4c1b-8a47-a72b936ece03", 00:15:06.269 "is_configured": false, 00:15:06.269 "data_offset": 0, 00:15:06.269 "data_size": 63488 00:15:06.269 }, 00:15:06.269 { 00:15:06.269 "name": "BaseBdev3", 00:15:06.269 "uuid": "4945a542-7052-4570-a17d-1a06e55749d9", 00:15:06.269 "is_configured": true, 00:15:06.269 "data_offset": 2048, 00:15:06.269 "data_size": 63488 00:15:06.269 } 00:15:06.269 ] 00:15:06.269 }' 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.269 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.837 [2024-11-19 
12:07:09.967256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.837 12:07:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.837 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.837 "name": "Existed_Raid", 00:15:06.837 "uuid": "a8c42318-868b-466f-bace-eb275458bf40", 00:15:06.837 "strip_size_kb": 64, 00:15:06.837 "state": "configuring", 00:15:06.837 "raid_level": "raid5f", 00:15:06.837 "superblock": true, 00:15:06.837 "num_base_bdevs": 3, 00:15:06.837 "num_base_bdevs_discovered": 2, 00:15:06.837 "num_base_bdevs_operational": 3, 00:15:06.837 "base_bdevs_list": [ 00:15:06.837 { 00:15:06.837 "name": null, 00:15:06.837 "uuid": "46c0985f-b354-4b5e-bf7a-2987ebb7e2fa", 00:15:06.837 "is_configured": false, 00:15:06.837 "data_offset": 0, 00:15:06.837 "data_size": 63488 00:15:06.837 }, 00:15:06.837 { 00:15:06.837 "name": "BaseBdev2", 00:15:06.837 "uuid": "c05b9567-831c-4c1b-8a47-a72b936ece03", 00:15:06.837 "is_configured": true, 00:15:06.837 "data_offset": 2048, 00:15:06.837 "data_size": 63488 00:15:06.837 }, 00:15:06.837 { 00:15:06.837 "name": "BaseBdev3", 00:15:06.837 "uuid": "4945a542-7052-4570-a17d-1a06e55749d9", 00:15:06.837 "is_configured": true, 00:15:06.837 "data_offset": 2048, 00:15:06.837 "data_size": 63488 00:15:06.837 } 00:15:06.837 ] 00:15:06.837 }' 00:15:06.837 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.837 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.096 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.096 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:07.096 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.096 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.096 12:07:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.096 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:07.096 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.096 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.097 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.097 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:07.097 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.097 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 46c0985f-b354-4b5e-bf7a-2987ebb7e2fa 00:15:07.097 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.097 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.355 [2024-11-19 12:07:10.484148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:07.355 [2024-11-19 12:07:10.484359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:07.355 [2024-11-19 12:07:10.484375] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:07.355 [2024-11-19 12:07:10.484620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:07.355 NewBaseBdev 00:15:07.355 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.355 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:07.355 12:07:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:07.355 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:07.355 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:07.355 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:07.355 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:07.355 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:07.355 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.355 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.355 [2024-11-19 12:07:10.489809] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:07.355 [2024-11-19 12:07:10.489880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:07.355 [2024-11-19 12:07:10.490088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.355 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.355 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:07.355 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.355 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.355 [ 00:15:07.355 { 00:15:07.355 "name": "NewBaseBdev", 00:15:07.355 "aliases": [ 00:15:07.355 "46c0985f-b354-4b5e-bf7a-2987ebb7e2fa" 00:15:07.355 ], 00:15:07.355 "product_name": "Malloc 
disk", 00:15:07.355 "block_size": 512, 00:15:07.355 "num_blocks": 65536, 00:15:07.355 "uuid": "46c0985f-b354-4b5e-bf7a-2987ebb7e2fa", 00:15:07.355 "assigned_rate_limits": { 00:15:07.355 "rw_ios_per_sec": 0, 00:15:07.355 "rw_mbytes_per_sec": 0, 00:15:07.355 "r_mbytes_per_sec": 0, 00:15:07.355 "w_mbytes_per_sec": 0 00:15:07.355 }, 00:15:07.355 "claimed": true, 00:15:07.355 "claim_type": "exclusive_write", 00:15:07.355 "zoned": false, 00:15:07.355 "supported_io_types": { 00:15:07.355 "read": true, 00:15:07.355 "write": true, 00:15:07.355 "unmap": true, 00:15:07.355 "flush": true, 00:15:07.355 "reset": true, 00:15:07.355 "nvme_admin": false, 00:15:07.355 "nvme_io": false, 00:15:07.355 "nvme_io_md": false, 00:15:07.355 "write_zeroes": true, 00:15:07.355 "zcopy": true, 00:15:07.355 "get_zone_info": false, 00:15:07.355 "zone_management": false, 00:15:07.355 "zone_append": false, 00:15:07.355 "compare": false, 00:15:07.355 "compare_and_write": false, 00:15:07.355 "abort": true, 00:15:07.355 "seek_hole": false, 00:15:07.355 "seek_data": false, 00:15:07.355 "copy": true, 00:15:07.355 "nvme_iov_md": false 00:15:07.355 }, 00:15:07.355 "memory_domains": [ 00:15:07.355 { 00:15:07.356 "dma_device_id": "system", 00:15:07.356 "dma_device_type": 1 00:15:07.356 }, 00:15:07.356 { 00:15:07.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.356 "dma_device_type": 2 00:15:07.356 } 00:15:07.356 ], 00:15:07.356 "driver_specific": {} 00:15:07.356 } 00:15:07.356 ] 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.356 12:07:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.356 "name": "Existed_Raid", 00:15:07.356 "uuid": "a8c42318-868b-466f-bace-eb275458bf40", 00:15:07.356 "strip_size_kb": 64, 00:15:07.356 "state": "online", 00:15:07.356 "raid_level": "raid5f", 00:15:07.356 "superblock": true, 00:15:07.356 "num_base_bdevs": 3, 00:15:07.356 "num_base_bdevs_discovered": 3, 00:15:07.356 "num_base_bdevs_operational": 3, 00:15:07.356 
"base_bdevs_list": [ 00:15:07.356 { 00:15:07.356 "name": "NewBaseBdev", 00:15:07.356 "uuid": "46c0985f-b354-4b5e-bf7a-2987ebb7e2fa", 00:15:07.356 "is_configured": true, 00:15:07.356 "data_offset": 2048, 00:15:07.356 "data_size": 63488 00:15:07.356 }, 00:15:07.356 { 00:15:07.356 "name": "BaseBdev2", 00:15:07.356 "uuid": "c05b9567-831c-4c1b-8a47-a72b936ece03", 00:15:07.356 "is_configured": true, 00:15:07.356 "data_offset": 2048, 00:15:07.356 "data_size": 63488 00:15:07.356 }, 00:15:07.356 { 00:15:07.356 "name": "BaseBdev3", 00:15:07.356 "uuid": "4945a542-7052-4570-a17d-1a06e55749d9", 00:15:07.356 "is_configured": true, 00:15:07.356 "data_offset": 2048, 00:15:07.356 "data_size": 63488 00:15:07.356 } 00:15:07.356 ] 00:15:07.356 }' 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.356 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.615 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:07.615 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:07.615 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:07.615 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:07.615 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:07.615 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:07.615 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:07.615 12:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:07.615 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:07.615 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.615 [2024-11-19 12:07:10.967726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.615 12:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:07.874 "name": "Existed_Raid", 00:15:07.874 "aliases": [ 00:15:07.874 "a8c42318-868b-466f-bace-eb275458bf40" 00:15:07.874 ], 00:15:07.874 "product_name": "Raid Volume", 00:15:07.874 "block_size": 512, 00:15:07.874 "num_blocks": 126976, 00:15:07.874 "uuid": "a8c42318-868b-466f-bace-eb275458bf40", 00:15:07.874 "assigned_rate_limits": { 00:15:07.874 "rw_ios_per_sec": 0, 00:15:07.874 "rw_mbytes_per_sec": 0, 00:15:07.874 "r_mbytes_per_sec": 0, 00:15:07.874 "w_mbytes_per_sec": 0 00:15:07.874 }, 00:15:07.874 "claimed": false, 00:15:07.874 "zoned": false, 00:15:07.874 "supported_io_types": { 00:15:07.874 "read": true, 00:15:07.874 "write": true, 00:15:07.874 "unmap": false, 00:15:07.874 "flush": false, 00:15:07.874 "reset": true, 00:15:07.874 "nvme_admin": false, 00:15:07.874 "nvme_io": false, 00:15:07.874 "nvme_io_md": false, 00:15:07.874 "write_zeroes": true, 00:15:07.874 "zcopy": false, 00:15:07.874 "get_zone_info": false, 00:15:07.874 "zone_management": false, 00:15:07.874 "zone_append": false, 00:15:07.874 "compare": false, 00:15:07.874 "compare_and_write": false, 00:15:07.874 "abort": false, 00:15:07.874 "seek_hole": false, 00:15:07.874 "seek_data": false, 00:15:07.874 "copy": false, 00:15:07.874 "nvme_iov_md": false 00:15:07.874 }, 00:15:07.874 "driver_specific": { 00:15:07.874 "raid": { 00:15:07.874 "uuid": "a8c42318-868b-466f-bace-eb275458bf40", 00:15:07.874 "strip_size_kb": 64, 00:15:07.874 "state": "online", 00:15:07.874 "raid_level": "raid5f", 00:15:07.874 "superblock": true, 00:15:07.874 
"num_base_bdevs": 3, 00:15:07.874 "num_base_bdevs_discovered": 3, 00:15:07.874 "num_base_bdevs_operational": 3, 00:15:07.874 "base_bdevs_list": [ 00:15:07.874 { 00:15:07.874 "name": "NewBaseBdev", 00:15:07.874 "uuid": "46c0985f-b354-4b5e-bf7a-2987ebb7e2fa", 00:15:07.874 "is_configured": true, 00:15:07.874 "data_offset": 2048, 00:15:07.874 "data_size": 63488 00:15:07.874 }, 00:15:07.874 { 00:15:07.874 "name": "BaseBdev2", 00:15:07.874 "uuid": "c05b9567-831c-4c1b-8a47-a72b936ece03", 00:15:07.874 "is_configured": true, 00:15:07.874 "data_offset": 2048, 00:15:07.874 "data_size": 63488 00:15:07.874 }, 00:15:07.874 { 00:15:07.874 "name": "BaseBdev3", 00:15:07.874 "uuid": "4945a542-7052-4570-a17d-1a06e55749d9", 00:15:07.874 "is_configured": true, 00:15:07.874 "data_offset": 2048, 00:15:07.874 "data_size": 63488 00:15:07.874 } 00:15:07.874 ] 00:15:07.874 } 00:15:07.874 } 00:15:07.874 }' 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:07.874 BaseBdev2 00:15:07.874 BaseBdev3' 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.874 12:07:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.874 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.134 [2024-11-19 12:07:11.259046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:08.134 [2024-11-19 12:07:11.259070] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.134 [2024-11-19 12:07:11.259152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.134 [2024-11-19 12:07:11.259450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.134 [2024-11-19 12:07:11.259469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80452 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80452 ']' 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80452 00:15:08.134 12:07:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80452 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80452' 00:15:08.134 killing process with pid 80452 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80452 00:15:08.134 [2024-11-19 12:07:11.304504] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.134 12:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80452 00:15:08.393 [2024-11-19 12:07:11.582827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.330 12:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:09.330 00:15:09.330 real 0m10.162s 00:15:09.330 user 0m16.185s 00:15:09.330 sys 0m1.756s 00:15:09.330 12:07:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.330 12:07:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.330 ************************************ 00:15:09.330 END TEST raid5f_state_function_test_sb 00:15:09.330 ************************************ 00:15:09.330 12:07:12 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:09.330 12:07:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:09.330 
12:07:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.330 12:07:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.592 ************************************ 00:15:09.592 START TEST raid5f_superblock_test 00:15:09.592 ************************************ 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81067 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81067 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81067 ']' 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.592 12:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.592 [2024-11-19 12:07:12.799416] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:15:09.592 [2024-11-19 12:07:12.799522] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81067 ] 00:15:09.852 [2024-11-19 12:07:12.971028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.852 [2024-11-19 12:07:13.089114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.111 [2024-11-19 12:07:13.276245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.111 [2024-11-19 12:07:13.276304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.370 malloc1 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.370 [2024-11-19 12:07:13.657027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:10.370 [2024-11-19 12:07:13.657151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.370 [2024-11-19 12:07:13.657197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:10.370 [2024-11-19 12:07:13.657228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.370 [2024-11-19 12:07:13.659366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.370 [2024-11-19 12:07:13.659453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:10.370 pt1 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.370 malloc2 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.370 [2024-11-19 12:07:13.709259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:10.370 [2024-11-19 12:07:13.709370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.370 [2024-11-19 12:07:13.709408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:10.370 [2024-11-19 12:07:13.709435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.370 [2024-11-19 12:07:13.711447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.370 [2024-11-19 12:07:13.711515] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:10.370 pt2 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:10.370 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:10.371 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:10.371 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:10.371 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:10.371 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:10.371 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:10.371 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:10.371 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:10.371 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.371 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.630 malloc3 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.630 [2024-11-19 12:07:13.797825] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:10.630 [2024-11-19 12:07:13.797938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.630 [2024-11-19 12:07:13.797978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:10.630 [2024-11-19 12:07:13.798030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.630 [2024-11-19 12:07:13.800085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.630 [2024-11-19 12:07:13.800168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:10.630 pt3 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.630 [2024-11-19 12:07:13.809856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:10.630 [2024-11-19 12:07:13.811677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:10.630 [2024-11-19 12:07:13.811793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:10.630 [2024-11-19 12:07:13.812034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:10.630 [2024-11-19 12:07:13.812095] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:10.630 [2024-11-19 12:07:13.812386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:10.630 [2024-11-19 12:07:13.817807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:10.630 [2024-11-19 12:07:13.817859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:10.630 [2024-11-19 12:07:13.818094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.630 "name": "raid_bdev1", 00:15:10.630 "uuid": "fdeb9295-39b0-40e9-9ec5-b55565fafdd7", 00:15:10.630 "strip_size_kb": 64, 00:15:10.630 "state": "online", 00:15:10.630 "raid_level": "raid5f", 00:15:10.630 "superblock": true, 00:15:10.630 "num_base_bdevs": 3, 00:15:10.630 "num_base_bdevs_discovered": 3, 00:15:10.630 "num_base_bdevs_operational": 3, 00:15:10.630 "base_bdevs_list": [ 00:15:10.630 { 00:15:10.630 "name": "pt1", 00:15:10.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.630 "is_configured": true, 00:15:10.630 "data_offset": 2048, 00:15:10.630 "data_size": 63488 00:15:10.630 }, 00:15:10.630 { 00:15:10.630 "name": "pt2", 00:15:10.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.630 "is_configured": true, 00:15:10.630 "data_offset": 2048, 00:15:10.630 "data_size": 63488 00:15:10.630 }, 00:15:10.630 { 00:15:10.630 "name": "pt3", 00:15:10.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.630 "is_configured": true, 00:15:10.630 "data_offset": 2048, 00:15:10.630 "data_size": 63488 00:15:10.630 } 00:15:10.630 ] 00:15:10.630 }' 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.630 12:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.889 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:10.889 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:10.889 12:07:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:10.889 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:10.889 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:10.889 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:10.889 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:10.889 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.889 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.889 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:10.889 [2024-11-19 12:07:14.223771] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.889 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:11.148 "name": "raid_bdev1", 00:15:11.148 "aliases": [ 00:15:11.148 "fdeb9295-39b0-40e9-9ec5-b55565fafdd7" 00:15:11.148 ], 00:15:11.148 "product_name": "Raid Volume", 00:15:11.148 "block_size": 512, 00:15:11.148 "num_blocks": 126976, 00:15:11.148 "uuid": "fdeb9295-39b0-40e9-9ec5-b55565fafdd7", 00:15:11.148 "assigned_rate_limits": { 00:15:11.148 "rw_ios_per_sec": 0, 00:15:11.148 "rw_mbytes_per_sec": 0, 00:15:11.148 "r_mbytes_per_sec": 0, 00:15:11.148 "w_mbytes_per_sec": 0 00:15:11.148 }, 00:15:11.148 "claimed": false, 00:15:11.148 "zoned": false, 00:15:11.148 "supported_io_types": { 00:15:11.148 "read": true, 00:15:11.148 "write": true, 00:15:11.148 "unmap": false, 00:15:11.148 "flush": false, 00:15:11.148 "reset": true, 00:15:11.148 "nvme_admin": false, 00:15:11.148 "nvme_io": false, 00:15:11.148 "nvme_io_md": false, 
00:15:11.148 "write_zeroes": true, 00:15:11.148 "zcopy": false, 00:15:11.148 "get_zone_info": false, 00:15:11.148 "zone_management": false, 00:15:11.148 "zone_append": false, 00:15:11.148 "compare": false, 00:15:11.148 "compare_and_write": false, 00:15:11.148 "abort": false, 00:15:11.148 "seek_hole": false, 00:15:11.148 "seek_data": false, 00:15:11.148 "copy": false, 00:15:11.148 "nvme_iov_md": false 00:15:11.148 }, 00:15:11.148 "driver_specific": { 00:15:11.148 "raid": { 00:15:11.148 "uuid": "fdeb9295-39b0-40e9-9ec5-b55565fafdd7", 00:15:11.148 "strip_size_kb": 64, 00:15:11.148 "state": "online", 00:15:11.148 "raid_level": "raid5f", 00:15:11.148 "superblock": true, 00:15:11.148 "num_base_bdevs": 3, 00:15:11.148 "num_base_bdevs_discovered": 3, 00:15:11.148 "num_base_bdevs_operational": 3, 00:15:11.148 "base_bdevs_list": [ 00:15:11.148 { 00:15:11.148 "name": "pt1", 00:15:11.148 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.148 "is_configured": true, 00:15:11.148 "data_offset": 2048, 00:15:11.148 "data_size": 63488 00:15:11.148 }, 00:15:11.148 { 00:15:11.148 "name": "pt2", 00:15:11.148 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.148 "is_configured": true, 00:15:11.148 "data_offset": 2048, 00:15:11.148 "data_size": 63488 00:15:11.148 }, 00:15:11.148 { 00:15:11.148 "name": "pt3", 00:15:11.148 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.148 "is_configured": true, 00:15:11.148 "data_offset": 2048, 00:15:11.148 "data_size": 63488 00:15:11.148 } 00:15:11.148 ] 00:15:11.148 } 00:15:11.148 } 00:15:11.148 }' 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:11.148 pt2 00:15:11.148 pt3' 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.148 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.149 
12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.149 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:11.149 [2024-11-19 12:07:14.511262] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fdeb9295-39b0-40e9-9ec5-b55565fafdd7 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fdeb9295-39b0-40e9-9ec5-b55565fafdd7 ']' 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:11.408 12:07:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.408 [2024-11-19 12:07:14.559020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.408 [2024-11-19 12:07:14.559081] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.408 [2024-11-19 12:07:14.559174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.408 [2024-11-19 12:07:14.559282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.408 [2024-11-19 12:07:14.559353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.408 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.409 [2024-11-19 12:07:14.710792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:11.409 [2024-11-19 12:07:14.712675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:11.409 [2024-11-19 12:07:14.712780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:11.409 [2024-11-19 12:07:14.712849] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:11.409 [2024-11-19 12:07:14.712930] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:11.409 [2024-11-19 12:07:14.712979] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:11.409 [2024-11-19 12:07:14.713084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.409 [2024-11-19 12:07:14.713093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:11.409 request: 00:15:11.409 { 00:15:11.409 "name": "raid_bdev1", 00:15:11.409 "raid_level": "raid5f", 00:15:11.409 "base_bdevs": [ 00:15:11.409 "malloc1", 00:15:11.409 "malloc2", 00:15:11.409 "malloc3" 00:15:11.409 ], 00:15:11.409 "strip_size_kb": 64, 00:15:11.409 "superblock": false, 00:15:11.409 "method": "bdev_raid_create", 00:15:11.409 "req_id": 1 00:15:11.409 } 00:15:11.409 Got JSON-RPC error response 00:15:11.409 response: 00:15:11.409 { 00:15:11.409 "code": -17, 00:15:11.409 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:11.409 } 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.409 
12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.409 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.409 [2024-11-19 12:07:14.778631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:11.409 [2024-11-19 12:07:14.778727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.409 [2024-11-19 12:07:14.778761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:11.409 [2024-11-19 12:07:14.778788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.409 [2024-11-19 12:07:14.780938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.409 [2024-11-19 12:07:14.781015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:11.409 [2024-11-19 12:07:14.781106] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:11.409 [2024-11-19 12:07:14.781181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:11.667 pt1 00:15:11.667 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.667 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:11.667 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.667 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.667 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.667 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.668 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.668 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.668 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.668 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.668 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.668 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.668 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.668 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.668 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.668 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.668 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.668 "name": "raid_bdev1", 00:15:11.668 "uuid": "fdeb9295-39b0-40e9-9ec5-b55565fafdd7", 00:15:11.668 "strip_size_kb": 64, 00:15:11.668 "state": "configuring", 00:15:11.668 "raid_level": "raid5f", 00:15:11.668 "superblock": true, 00:15:11.668 "num_base_bdevs": 3, 00:15:11.668 "num_base_bdevs_discovered": 1, 00:15:11.668 
"num_base_bdevs_operational": 3, 00:15:11.668 "base_bdevs_list": [ 00:15:11.668 { 00:15:11.668 "name": "pt1", 00:15:11.668 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.668 "is_configured": true, 00:15:11.668 "data_offset": 2048, 00:15:11.668 "data_size": 63488 00:15:11.668 }, 00:15:11.668 { 00:15:11.668 "name": null, 00:15:11.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.668 "is_configured": false, 00:15:11.668 "data_offset": 2048, 00:15:11.668 "data_size": 63488 00:15:11.668 }, 00:15:11.668 { 00:15:11.668 "name": null, 00:15:11.668 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.668 "is_configured": false, 00:15:11.668 "data_offset": 2048, 00:15:11.668 "data_size": 63488 00:15:11.668 } 00:15:11.668 ] 00:15:11.668 }' 00:15:11.668 12:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.668 12:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.931 [2024-11-19 12:07:15.177964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:11.931 [2024-11-19 12:07:15.178095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.931 [2024-11-19 12:07:15.178132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:11.931 [2024-11-19 12:07:15.178160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.931 [2024-11-19 12:07:15.178575] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.931 [2024-11-19 12:07:15.178636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:11.931 [2024-11-19 12:07:15.178740] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:11.931 [2024-11-19 12:07:15.178788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:11.931 pt2 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.931 [2024-11-19 12:07:15.189956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.931 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.931 "name": "raid_bdev1", 00:15:11.931 "uuid": "fdeb9295-39b0-40e9-9ec5-b55565fafdd7", 00:15:11.931 "strip_size_kb": 64, 00:15:11.931 "state": "configuring", 00:15:11.931 "raid_level": "raid5f", 00:15:11.931 "superblock": true, 00:15:11.931 "num_base_bdevs": 3, 00:15:11.931 "num_base_bdevs_discovered": 1, 00:15:11.931 "num_base_bdevs_operational": 3, 00:15:11.931 "base_bdevs_list": [ 00:15:11.931 { 00:15:11.932 "name": "pt1", 00:15:11.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.932 "is_configured": true, 00:15:11.932 "data_offset": 2048, 00:15:11.932 "data_size": 63488 00:15:11.932 }, 00:15:11.932 { 00:15:11.932 "name": null, 00:15:11.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.932 "is_configured": false, 00:15:11.932 "data_offset": 0, 00:15:11.932 "data_size": 63488 00:15:11.932 }, 00:15:11.932 { 00:15:11.932 "name": null, 00:15:11.932 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.932 "is_configured": false, 00:15:11.932 "data_offset": 2048, 00:15:11.932 "data_size": 63488 00:15:11.932 } 00:15:11.932 ] 00:15:11.932 }' 00:15:11.932 12:07:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.932 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.503 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:12.503 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:12.503 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:12.503 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.503 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.503 [2024-11-19 12:07:15.649147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:12.503 [2024-11-19 12:07:15.649253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.503 [2024-11-19 12:07:15.649286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:12.503 [2024-11-19 12:07:15.649349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.503 [2024-11-19 12:07:15.649808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.503 [2024-11-19 12:07:15.649867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:12.503 [2024-11-19 12:07:15.649973] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:12.503 [2024-11-19 12:07:15.650040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:12.503 pt2 00:15:12.503 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.503 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:12.503 12:07:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:12.503 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:12.503 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.504 [2024-11-19 12:07:15.661113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:12.504 [2024-11-19 12:07:15.661195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.504 [2024-11-19 12:07:15.661223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:12.504 [2024-11-19 12:07:15.661251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.504 [2024-11-19 12:07:15.661613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.504 [2024-11-19 12:07:15.661672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:12.504 [2024-11-19 12:07:15.661757] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:12.504 [2024-11-19 12:07:15.661801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:12.504 [2024-11-19 12:07:15.661952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:12.504 [2024-11-19 12:07:15.662001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:12.504 [2024-11-19 12:07:15.662240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:12.504 [2024-11-19 12:07:15.667378] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:12.504 [2024-11-19 12:07:15.667433] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:12.504 [2024-11-19 12:07:15.667639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.504 pt3 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.504 "name": "raid_bdev1", 00:15:12.504 "uuid": "fdeb9295-39b0-40e9-9ec5-b55565fafdd7", 00:15:12.504 "strip_size_kb": 64, 00:15:12.504 "state": "online", 00:15:12.504 "raid_level": "raid5f", 00:15:12.504 "superblock": true, 00:15:12.504 "num_base_bdevs": 3, 00:15:12.504 "num_base_bdevs_discovered": 3, 00:15:12.504 "num_base_bdevs_operational": 3, 00:15:12.504 "base_bdevs_list": [ 00:15:12.504 { 00:15:12.504 "name": "pt1", 00:15:12.504 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:12.504 "is_configured": true, 00:15:12.504 "data_offset": 2048, 00:15:12.504 "data_size": 63488 00:15:12.504 }, 00:15:12.504 { 00:15:12.504 "name": "pt2", 00:15:12.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.504 "is_configured": true, 00:15:12.504 "data_offset": 2048, 00:15:12.504 "data_size": 63488 00:15:12.504 }, 00:15:12.504 { 00:15:12.504 "name": "pt3", 00:15:12.504 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.504 "is_configured": true, 00:15:12.504 "data_offset": 2048, 00:15:12.504 "data_size": 63488 00:15:12.504 } 00:15:12.504 ] 00:15:12.504 }' 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.504 12:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.762 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:12.762 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:12.762 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:12.762 
12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:12.762 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:12.762 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:12.762 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:12.762 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.762 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.762 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:13.019 [2024-11-19 12:07:16.137273] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.019 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.019 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:13.019 "name": "raid_bdev1", 00:15:13.019 "aliases": [ 00:15:13.019 "fdeb9295-39b0-40e9-9ec5-b55565fafdd7" 00:15:13.019 ], 00:15:13.019 "product_name": "Raid Volume", 00:15:13.020 "block_size": 512, 00:15:13.020 "num_blocks": 126976, 00:15:13.020 "uuid": "fdeb9295-39b0-40e9-9ec5-b55565fafdd7", 00:15:13.020 "assigned_rate_limits": { 00:15:13.020 "rw_ios_per_sec": 0, 00:15:13.020 "rw_mbytes_per_sec": 0, 00:15:13.020 "r_mbytes_per_sec": 0, 00:15:13.020 "w_mbytes_per_sec": 0 00:15:13.020 }, 00:15:13.020 "claimed": false, 00:15:13.020 "zoned": false, 00:15:13.020 "supported_io_types": { 00:15:13.020 "read": true, 00:15:13.020 "write": true, 00:15:13.020 "unmap": false, 00:15:13.020 "flush": false, 00:15:13.020 "reset": true, 00:15:13.020 "nvme_admin": false, 00:15:13.020 "nvme_io": false, 00:15:13.020 "nvme_io_md": false, 00:15:13.020 "write_zeroes": true, 00:15:13.020 "zcopy": false, 00:15:13.020 "get_zone_info": false, 
00:15:13.020 "zone_management": false, 00:15:13.020 "zone_append": false, 00:15:13.020 "compare": false, 00:15:13.020 "compare_and_write": false, 00:15:13.020 "abort": false, 00:15:13.020 "seek_hole": false, 00:15:13.020 "seek_data": false, 00:15:13.020 "copy": false, 00:15:13.020 "nvme_iov_md": false 00:15:13.020 }, 00:15:13.020 "driver_specific": { 00:15:13.020 "raid": { 00:15:13.020 "uuid": "fdeb9295-39b0-40e9-9ec5-b55565fafdd7", 00:15:13.020 "strip_size_kb": 64, 00:15:13.020 "state": "online", 00:15:13.020 "raid_level": "raid5f", 00:15:13.020 "superblock": true, 00:15:13.020 "num_base_bdevs": 3, 00:15:13.020 "num_base_bdevs_discovered": 3, 00:15:13.020 "num_base_bdevs_operational": 3, 00:15:13.020 "base_bdevs_list": [ 00:15:13.020 { 00:15:13.020 "name": "pt1", 00:15:13.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:13.020 "is_configured": true, 00:15:13.020 "data_offset": 2048, 00:15:13.020 "data_size": 63488 00:15:13.020 }, 00:15:13.020 { 00:15:13.020 "name": "pt2", 00:15:13.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.020 "is_configured": true, 00:15:13.020 "data_offset": 2048, 00:15:13.020 "data_size": 63488 00:15:13.020 }, 00:15:13.020 { 00:15:13.020 "name": "pt3", 00:15:13.020 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.020 "is_configured": true, 00:15:13.020 "data_offset": 2048, 00:15:13.020 "data_size": 63488 00:15:13.020 } 00:15:13.020 ] 00:15:13.020 } 00:15:13.020 } 00:15:13.020 }' 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:13.020 pt2 00:15:13.020 pt3' 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.020 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.279 [2024-11-19 12:07:16.436699] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fdeb9295-39b0-40e9-9ec5-b55565fafdd7 '!=' fdeb9295-39b0-40e9-9ec5-b55565fafdd7 ']' 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:13.279 12:07:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.279 [2024-11-19 12:07:16.480505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.279 12:07:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.279 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.279 "name": "raid_bdev1", 00:15:13.279 "uuid": "fdeb9295-39b0-40e9-9ec5-b55565fafdd7", 00:15:13.279 "strip_size_kb": 64, 00:15:13.279 "state": "online", 00:15:13.279 "raid_level": "raid5f", 00:15:13.279 "superblock": true, 00:15:13.279 "num_base_bdevs": 3, 00:15:13.279 "num_base_bdevs_discovered": 2, 00:15:13.279 "num_base_bdevs_operational": 2, 00:15:13.279 "base_bdevs_list": [ 00:15:13.279 { 00:15:13.279 "name": null, 00:15:13.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.279 "is_configured": false, 00:15:13.279 "data_offset": 0, 00:15:13.280 "data_size": 63488 00:15:13.280 }, 00:15:13.280 { 00:15:13.280 "name": "pt2", 00:15:13.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.280 "is_configured": true, 00:15:13.280 "data_offset": 2048, 00:15:13.280 "data_size": 63488 00:15:13.280 }, 00:15:13.280 { 00:15:13.280 "name": "pt3", 00:15:13.280 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.280 "is_configured": true, 00:15:13.280 "data_offset": 2048, 00:15:13.280 "data_size": 63488 00:15:13.280 } 00:15:13.280 ] 00:15:13.280 }' 00:15:13.280 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.280 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.539 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:13.539 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.539 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.539 [2024-11-19 12:07:16.895722] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.539 [2024-11-19 12:07:16.895750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.539 [2024-11-19 12:07:16.895817] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.539 [2024-11-19 12:07:16.895871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.539 [2024-11-19 12:07:16.895884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:13.539 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.539 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.539 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.539 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.539 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:13.539 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.799 [2024-11-19 12:07:16.979569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:13.799 [2024-11-19 12:07:16.979633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.799 [2024-11-19 12:07:16.979647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:13.799 [2024-11-19 12:07:16.979657] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:13.799 [2024-11-19 12:07:16.981697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.799 [2024-11-19 12:07:16.981736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:13.799 [2024-11-19 12:07:16.981804] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:13.799 [2024-11-19 12:07:16.981852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:13.799 pt2 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.799 12:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.799 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.799 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.799 "name": "raid_bdev1", 00:15:13.799 "uuid": "fdeb9295-39b0-40e9-9ec5-b55565fafdd7", 00:15:13.799 "strip_size_kb": 64, 00:15:13.799 "state": "configuring", 00:15:13.799 "raid_level": "raid5f", 00:15:13.799 "superblock": true, 00:15:13.799 "num_base_bdevs": 3, 00:15:13.799 "num_base_bdevs_discovered": 1, 00:15:13.799 "num_base_bdevs_operational": 2, 00:15:13.799 "base_bdevs_list": [ 00:15:13.799 { 00:15:13.799 "name": null, 00:15:13.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.799 "is_configured": false, 00:15:13.799 "data_offset": 2048, 00:15:13.799 "data_size": 63488 00:15:13.799 }, 00:15:13.799 { 00:15:13.799 "name": "pt2", 00:15:13.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.799 "is_configured": true, 00:15:13.799 "data_offset": 2048, 00:15:13.799 "data_size": 63488 00:15:13.799 }, 00:15:13.799 { 00:15:13.799 "name": null, 00:15:13.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.799 "is_configured": false, 00:15:13.799 "data_offset": 2048, 00:15:13.799 "data_size": 63488 00:15:13.799 } 00:15:13.799 ] 00:15:13.799 }' 00:15:13.799 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.799 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.057 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:14.057 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:14.057 12:07:17 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:14.057 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:14.057 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.057 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.057 [2024-11-19 12:07:17.410938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:14.057 [2024-11-19 12:07:17.411004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.057 [2024-11-19 12:07:17.411025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:14.057 [2024-11-19 12:07:17.411035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.057 [2024-11-19 12:07:17.411490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.057 [2024-11-19 12:07:17.411524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:14.057 [2024-11-19 12:07:17.411600] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:14.057 [2024-11-19 12:07:17.411637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:14.057 [2024-11-19 12:07:17.411760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:14.057 [2024-11-19 12:07:17.411776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:14.057 [2024-11-19 12:07:17.412022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:14.057 [2024-11-19 12:07:17.417116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:14.057 [2024-11-19 12:07:17.417137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:14.057 [2024-11-19 12:07:17.417429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.057 pt3 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.058 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.315 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.315 12:07:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.315 "name": "raid_bdev1", 00:15:14.315 "uuid": "fdeb9295-39b0-40e9-9ec5-b55565fafdd7", 00:15:14.315 "strip_size_kb": 64, 00:15:14.315 "state": "online", 00:15:14.315 "raid_level": "raid5f", 00:15:14.315 "superblock": true, 00:15:14.315 "num_base_bdevs": 3, 00:15:14.315 "num_base_bdevs_discovered": 2, 00:15:14.315 "num_base_bdevs_operational": 2, 00:15:14.315 "base_bdevs_list": [ 00:15:14.315 { 00:15:14.315 "name": null, 00:15:14.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.315 "is_configured": false, 00:15:14.315 "data_offset": 2048, 00:15:14.315 "data_size": 63488 00:15:14.315 }, 00:15:14.315 { 00:15:14.315 "name": "pt2", 00:15:14.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:14.315 "is_configured": true, 00:15:14.315 "data_offset": 2048, 00:15:14.315 "data_size": 63488 00:15:14.315 }, 00:15:14.315 { 00:15:14.315 "name": "pt3", 00:15:14.315 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:14.315 "is_configured": true, 00:15:14.315 "data_offset": 2048, 00:15:14.315 "data_size": 63488 00:15:14.315 } 00:15:14.315 ] 00:15:14.315 }' 00:15:14.315 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.315 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.574 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:14.574 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.574 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.574 [2024-11-19 12:07:17.851235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.574 [2024-11-19 12:07:17.851269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.574 [2024-11-19 12:07:17.851342] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.574 [2024-11-19 12:07:17.851406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.574 [2024-11-19 12:07:17.851422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:14.574 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.574 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.574 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:14.574 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.574 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.574 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.574 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:14.574 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:14.574 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:14.574 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.575 [2024-11-19 12:07:17.923220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:14.575 [2024-11-19 12:07:17.923271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.575 [2024-11-19 12:07:17.923287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:14.575 [2024-11-19 12:07:17.923297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.575 [2024-11-19 12:07:17.925437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.575 [2024-11-19 12:07:17.925470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:14.575 [2024-11-19 12:07:17.925547] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:14.575 [2024-11-19 12:07:17.925589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:14.575 [2024-11-19 12:07:17.925734] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:14.575 [2024-11-19 12:07:17.925752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.575 [2024-11-19 12:07:17.925767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:14.575 [2024-11-19 12:07:17.925835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:14.575 pt1 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:14.575 12:07:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.575 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.833 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.833 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.833 "name": "raid_bdev1", 00:15:14.833 "uuid": "fdeb9295-39b0-40e9-9ec5-b55565fafdd7", 00:15:14.833 "strip_size_kb": 64, 00:15:14.833 "state": "configuring", 00:15:14.833 "raid_level": "raid5f", 00:15:14.833 
"superblock": true, 00:15:14.833 "num_base_bdevs": 3, 00:15:14.833 "num_base_bdevs_discovered": 1, 00:15:14.833 "num_base_bdevs_operational": 2, 00:15:14.833 "base_bdevs_list": [ 00:15:14.833 { 00:15:14.833 "name": null, 00:15:14.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.833 "is_configured": false, 00:15:14.833 "data_offset": 2048, 00:15:14.833 "data_size": 63488 00:15:14.833 }, 00:15:14.833 { 00:15:14.833 "name": "pt2", 00:15:14.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:14.833 "is_configured": true, 00:15:14.833 "data_offset": 2048, 00:15:14.833 "data_size": 63488 00:15:14.833 }, 00:15:14.833 { 00:15:14.833 "name": null, 00:15:14.833 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:14.833 "is_configured": false, 00:15:14.833 "data_offset": 2048, 00:15:14.833 "data_size": 63488 00:15:14.833 } 00:15:14.833 ] 00:15:14.833 }' 00:15:14.833 12:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.833 12:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.091 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:15.091 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:15.092 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.092 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.092 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.092 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:15.092 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:15.092 12:07:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.092 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.092 [2024-11-19 12:07:18.458282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:15.092 [2024-11-19 12:07:18.458335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.092 [2024-11-19 12:07:18.458353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:15.092 [2024-11-19 12:07:18.458362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.092 [2024-11-19 12:07:18.458801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.092 [2024-11-19 12:07:18.458827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:15.092 [2024-11-19 12:07:18.458904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:15.092 [2024-11-19 12:07:18.458925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:15.092 [2024-11-19 12:07:18.459064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:15.092 [2024-11-19 12:07:18.459073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:15.092 [2024-11-19 12:07:18.459320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:15.092 [2024-11-19 12:07:18.465351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:15.092 [2024-11-19 12:07:18.465379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:15.092 [2024-11-19 12:07:18.465648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.092 pt3 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.350 "name": "raid_bdev1", 00:15:15.350 "uuid": "fdeb9295-39b0-40e9-9ec5-b55565fafdd7", 00:15:15.350 "strip_size_kb": 64, 00:15:15.350 "state": "online", 00:15:15.350 "raid_level": 
"raid5f", 00:15:15.350 "superblock": true, 00:15:15.350 "num_base_bdevs": 3, 00:15:15.350 "num_base_bdevs_discovered": 2, 00:15:15.350 "num_base_bdevs_operational": 2, 00:15:15.350 "base_bdevs_list": [ 00:15:15.350 { 00:15:15.350 "name": null, 00:15:15.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.350 "is_configured": false, 00:15:15.350 "data_offset": 2048, 00:15:15.350 "data_size": 63488 00:15:15.350 }, 00:15:15.350 { 00:15:15.350 "name": "pt2", 00:15:15.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.350 "is_configured": true, 00:15:15.350 "data_offset": 2048, 00:15:15.350 "data_size": 63488 00:15:15.350 }, 00:15:15.350 { 00:15:15.350 "name": "pt3", 00:15:15.350 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.350 "is_configured": true, 00:15:15.350 "data_offset": 2048, 00:15:15.350 "data_size": 63488 00:15:15.350 } 00:15:15.350 ] 00:15:15.350 }' 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.350 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.609 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:15.609 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.609 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.609 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:15.609 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.609 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:15.609 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:15.609 12:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:15:15.609 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.609 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.609 [2024-11-19 12:07:18.980373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.869 12:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.869 12:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' fdeb9295-39b0-40e9-9ec5-b55565fafdd7 '!=' fdeb9295-39b0-40e9-9ec5-b55565fafdd7 ']' 00:15:15.870 12:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81067 00:15:15.870 12:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81067 ']' 00:15:15.870 12:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81067 00:15:15.870 12:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:15.870 12:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.870 12:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81067 00:15:15.870 12:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.870 12:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.870 killing process with pid 81067 00:15:15.870 12:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81067' 00:15:15.870 12:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81067 00:15:15.870 [2024-11-19 12:07:19.056119] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:15.870 [2024-11-19 12:07:19.056206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:15:15.870 [2024-11-19 12:07:19.056263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.870 [2024-11-19 12:07:19.056274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:15.870 12:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81067 00:15:16.129 [2024-11-19 12:07:19.342033] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.068 12:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:17.068 00:15:17.068 real 0m7.676s 00:15:17.068 user 0m12.128s 00:15:17.068 sys 0m1.263s 00:15:17.068 12:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.068 12:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.068 ************************************ 00:15:17.068 END TEST raid5f_superblock_test 00:15:17.068 ************************************ 00:15:17.328 12:07:20 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:17.328 12:07:20 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:17.328 12:07:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:17.328 12:07:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.328 12:07:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.328 ************************************ 00:15:17.328 START TEST raid5f_rebuild_test 00:15:17.328 ************************************ 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:17.328 12:07:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81512 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81512 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81512 ']' 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.328 12:07:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.328 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:15:17.328 Zero copy mechanism will not be used. 00:15:17.328 [2024-11-19 12:07:20.556889] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:15:17.328 [2024-11-19 12:07:20.557023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81512 ] 00:15:17.587 [2024-11-19 12:07:20.730666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.587 [2024-11-19 12:07:20.844092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.847 [2024-11-19 12:07:21.032435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.847 [2024-11-19 12:07:21.032473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.107 BaseBdev1_malloc 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.107 12:07:21 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.107 [2024-11-19 12:07:21.420795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:18.107 [2024-11-19 12:07:21.420875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.107 [2024-11-19 12:07:21.420899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:18.107 [2024-11-19 12:07:21.420910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.107 [2024-11-19 12:07:21.422933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.107 [2024-11-19 12:07:21.422972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:18.107 BaseBdev1 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.107 BaseBdev2_malloc 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.107 [2024-11-19 12:07:21.473089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:15:18.107 [2024-11-19 12:07:21.473141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.107 [2024-11-19 12:07:21.473159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:18.107 [2024-11-19 12:07:21.473170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.107 [2024-11-19 12:07:21.475103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.107 [2024-11-19 12:07:21.475161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:18.107 BaseBdev2 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.107 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.367 BaseBdev3_malloc 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.367 [2024-11-19 12:07:21.557197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:18.367 [2024-11-19 12:07:21.557247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.367 [2024-11-19 12:07:21.557268] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:15:18.367 [2024-11-19 12:07:21.557279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.367 [2024-11-19 12:07:21.559401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.367 [2024-11-19 12:07:21.559440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:18.367 BaseBdev3 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.367 spare_malloc 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.367 spare_delay 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.367 [2024-11-19 12:07:21.619854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:18.367 [2024-11-19 12:07:21.619901] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.367 [2024-11-19 12:07:21.619917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:18.367 [2024-11-19 12:07:21.619942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.367 [2024-11-19 12:07:21.621912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.367 [2024-11-19 12:07:21.621951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:18.367 spare 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.367 [2024-11-19 12:07:21.631892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.367 [2024-11-19 12:07:21.633590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.367 [2024-11-19 12:07:21.633663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.367 [2024-11-19 12:07:21.633740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:18.367 [2024-11-19 12:07:21.633751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:18.367 [2024-11-19 12:07:21.634008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:18.367 [2024-11-19 12:07:21.639254] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:18.367 [2024-11-19 12:07:21.639277] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:18.367 [2024-11-19 12:07:21.639462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.367 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.367 12:07:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.367 "name": "raid_bdev1", 00:15:18.367 "uuid": "cceef2bc-ba44-4105-b51a-d570fed8a7c4", 00:15:18.367 "strip_size_kb": 64, 00:15:18.367 "state": "online", 00:15:18.367 "raid_level": "raid5f", 00:15:18.367 "superblock": false, 00:15:18.367 "num_base_bdevs": 3, 00:15:18.367 "num_base_bdevs_discovered": 3, 00:15:18.367 "num_base_bdevs_operational": 3, 00:15:18.367 "base_bdevs_list": [ 00:15:18.367 { 00:15:18.367 "name": "BaseBdev1", 00:15:18.367 "uuid": "294d6bbf-d202-5c6f-b5cf-add9285a6ca7", 00:15:18.367 "is_configured": true, 00:15:18.367 "data_offset": 0, 00:15:18.367 "data_size": 65536 00:15:18.367 }, 00:15:18.367 { 00:15:18.367 "name": "BaseBdev2", 00:15:18.367 "uuid": "8b17fe39-92e4-55f6-8ed5-146900bb6d41", 00:15:18.367 "is_configured": true, 00:15:18.367 "data_offset": 0, 00:15:18.367 "data_size": 65536 00:15:18.368 }, 00:15:18.368 { 00:15:18.368 "name": "BaseBdev3", 00:15:18.368 "uuid": "66449fa6-ffde-53c1-ade7-5b30b8ac69e2", 00:15:18.368 "is_configured": true, 00:15:18.368 "data_offset": 0, 00:15:18.368 "data_size": 65536 00:15:18.368 } 00:15:18.368 ] 00:15:18.368 }' 00:15:18.368 12:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.368 12:07:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.935 [2024-11-19 12:07:22.112899] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:15:18.935 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:19.218 [2024-11-19 12:07:22.384301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:19.219 /dev/nbd0 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.219 1+0 records in 00:15:19.219 1+0 records out 00:15:19.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355732 s, 11.5 MB/s 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:19.219 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:19.478 512+0 records in 00:15:19.478 512+0 records out 00:15:19.478 67108864 bytes (67 MB, 64 MiB) copied, 0.362497 s, 185 MB/s 00:15:19.478 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:19.478 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.478 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:19.478 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:19.478 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:19.478 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.478 12:07:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:19.736 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:19.736 
12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:19.736 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:19.736 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.736 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.736 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:19.736 [2024-11-19 12:07:23.018362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.736 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:19.736 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.737 [2024-11-19 12:07:23.026767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.737 "name": "raid_bdev1", 00:15:19.737 "uuid": "cceef2bc-ba44-4105-b51a-d570fed8a7c4", 00:15:19.737 "strip_size_kb": 64, 00:15:19.737 "state": "online", 00:15:19.737 "raid_level": "raid5f", 00:15:19.737 "superblock": false, 00:15:19.737 "num_base_bdevs": 3, 00:15:19.737 "num_base_bdevs_discovered": 2, 00:15:19.737 "num_base_bdevs_operational": 2, 00:15:19.737 "base_bdevs_list": [ 00:15:19.737 { 00:15:19.737 "name": null, 00:15:19.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.737 "is_configured": false, 00:15:19.737 "data_offset": 0, 00:15:19.737 "data_size": 65536 00:15:19.737 }, 00:15:19.737 { 00:15:19.737 "name": "BaseBdev2", 00:15:19.737 "uuid": "8b17fe39-92e4-55f6-8ed5-146900bb6d41", 00:15:19.737 "is_configured": true, 00:15:19.737 "data_offset": 0, 00:15:19.737 "data_size": 65536 00:15:19.737 }, 00:15:19.737 { 00:15:19.737 "name": "BaseBdev3", 00:15:19.737 "uuid": 
"66449fa6-ffde-53c1-ade7-5b30b8ac69e2", 00:15:19.737 "is_configured": true, 00:15:19.737 "data_offset": 0, 00:15:19.737 "data_size": 65536 00:15:19.737 } 00:15:19.737 ] 00:15:19.737 }' 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.737 12:07:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.305 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:20.305 12:07:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.305 12:07:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.305 [2024-11-19 12:07:23.478029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.305 [2024-11-19 12:07:23.495232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:20.305 12:07:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.305 12:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:20.305 [2024-11-19 12:07:23.502737] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:21.243 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.243 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.243 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.243 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.243 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.243 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.243 12:07:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.243 12:07:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.243 12:07:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.243 12:07:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.243 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.243 "name": "raid_bdev1", 00:15:21.243 "uuid": "cceef2bc-ba44-4105-b51a-d570fed8a7c4", 00:15:21.243 "strip_size_kb": 64, 00:15:21.243 "state": "online", 00:15:21.243 "raid_level": "raid5f", 00:15:21.243 "superblock": false, 00:15:21.243 "num_base_bdevs": 3, 00:15:21.243 "num_base_bdevs_discovered": 3, 00:15:21.243 "num_base_bdevs_operational": 3, 00:15:21.243 "process": { 00:15:21.243 "type": "rebuild", 00:15:21.243 "target": "spare", 00:15:21.243 "progress": { 00:15:21.243 "blocks": 20480, 00:15:21.243 "percent": 15 00:15:21.243 } 00:15:21.243 }, 00:15:21.243 "base_bdevs_list": [ 00:15:21.243 { 00:15:21.243 "name": "spare", 00:15:21.243 "uuid": "3003bb9e-0f91-5264-8ff8-e851f2ae7e00", 00:15:21.243 "is_configured": true, 00:15:21.243 "data_offset": 0, 00:15:21.243 "data_size": 65536 00:15:21.243 }, 00:15:21.243 { 00:15:21.243 "name": "BaseBdev2", 00:15:21.243 "uuid": "8b17fe39-92e4-55f6-8ed5-146900bb6d41", 00:15:21.243 "is_configured": true, 00:15:21.243 "data_offset": 0, 00:15:21.243 "data_size": 65536 00:15:21.243 }, 00:15:21.243 { 00:15:21.243 "name": "BaseBdev3", 00:15:21.243 "uuid": "66449fa6-ffde-53c1-ade7-5b30b8ac69e2", 00:15:21.243 "is_configured": true, 00:15:21.243 "data_offset": 0, 00:15:21.243 "data_size": 65536 00:15:21.243 } 00:15:21.243 ] 00:15:21.243 }' 00:15:21.243 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.243 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.243 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.502 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.502 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:21.502 12:07:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.502 12:07:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.502 [2024-11-19 12:07:24.629689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.502 [2024-11-19 12:07:24.710264] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:21.502 [2024-11-19 12:07:24.710333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.502 [2024-11-19 12:07:24.710351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.502 [2024-11-19 12:07:24.710358] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:21.502 12:07:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.502 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:21.502 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.502 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.502 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.502 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.502 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:15:21.502 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.502 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.502 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.503 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.503 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.503 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.503 12:07:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.503 12:07:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.503 12:07:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.503 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.503 "name": "raid_bdev1", 00:15:21.503 "uuid": "cceef2bc-ba44-4105-b51a-d570fed8a7c4", 00:15:21.503 "strip_size_kb": 64, 00:15:21.503 "state": "online", 00:15:21.503 "raid_level": "raid5f", 00:15:21.503 "superblock": false, 00:15:21.503 "num_base_bdevs": 3, 00:15:21.503 "num_base_bdevs_discovered": 2, 00:15:21.503 "num_base_bdevs_operational": 2, 00:15:21.503 "base_bdevs_list": [ 00:15:21.503 { 00:15:21.503 "name": null, 00:15:21.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.503 "is_configured": false, 00:15:21.503 "data_offset": 0, 00:15:21.503 "data_size": 65536 00:15:21.503 }, 00:15:21.503 { 00:15:21.503 "name": "BaseBdev2", 00:15:21.503 "uuid": "8b17fe39-92e4-55f6-8ed5-146900bb6d41", 00:15:21.503 "is_configured": true, 00:15:21.503 "data_offset": 0, 00:15:21.503 "data_size": 65536 00:15:21.503 }, 00:15:21.503 { 00:15:21.503 "name": "BaseBdev3", 00:15:21.503 "uuid": 
"66449fa6-ffde-53c1-ade7-5b30b8ac69e2", 00:15:21.503 "is_configured": true, 00:15:21.503 "data_offset": 0, 00:15:21.503 "data_size": 65536 00:15:21.503 } 00:15:21.503 ] 00:15:21.503 }' 00:15:21.503 12:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.503 12:07:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.070 "name": "raid_bdev1", 00:15:22.070 "uuid": "cceef2bc-ba44-4105-b51a-d570fed8a7c4", 00:15:22.070 "strip_size_kb": 64, 00:15:22.070 "state": "online", 00:15:22.070 "raid_level": "raid5f", 00:15:22.070 "superblock": false, 00:15:22.070 "num_base_bdevs": 3, 00:15:22.070 "num_base_bdevs_discovered": 2, 00:15:22.070 "num_base_bdevs_operational": 2, 00:15:22.070 "base_bdevs_list": [ 00:15:22.070 { 00:15:22.070 
"name": null, 00:15:22.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.070 "is_configured": false, 00:15:22.070 "data_offset": 0, 00:15:22.070 "data_size": 65536 00:15:22.070 }, 00:15:22.070 { 00:15:22.070 "name": "BaseBdev2", 00:15:22.070 "uuid": "8b17fe39-92e4-55f6-8ed5-146900bb6d41", 00:15:22.070 "is_configured": true, 00:15:22.070 "data_offset": 0, 00:15:22.070 "data_size": 65536 00:15:22.070 }, 00:15:22.070 { 00:15:22.070 "name": "BaseBdev3", 00:15:22.070 "uuid": "66449fa6-ffde-53c1-ade7-5b30b8ac69e2", 00:15:22.070 "is_configured": true, 00:15:22.070 "data_offset": 0, 00:15:22.070 "data_size": 65536 00:15:22.070 } 00:15:22.070 ] 00:15:22.070 }' 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.070 [2024-11-19 12:07:25.307336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.070 [2024-11-19 12:07:25.323546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.070 12:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:22.070 [2024-11-19 12:07:25.331468] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:15:23.007 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.007 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.007 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.007 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.007 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.007 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.007 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.007 12:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.007 12:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.007 12:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.266 "name": "raid_bdev1", 00:15:23.266 "uuid": "cceef2bc-ba44-4105-b51a-d570fed8a7c4", 00:15:23.266 "strip_size_kb": 64, 00:15:23.266 "state": "online", 00:15:23.266 "raid_level": "raid5f", 00:15:23.266 "superblock": false, 00:15:23.266 "num_base_bdevs": 3, 00:15:23.266 "num_base_bdevs_discovered": 3, 00:15:23.266 "num_base_bdevs_operational": 3, 00:15:23.266 "process": { 00:15:23.266 "type": "rebuild", 00:15:23.266 "target": "spare", 00:15:23.266 "progress": { 00:15:23.266 "blocks": 20480, 00:15:23.266 "percent": 15 00:15:23.266 } 00:15:23.266 }, 00:15:23.266 "base_bdevs_list": [ 00:15:23.266 { 00:15:23.266 "name": "spare", 00:15:23.266 "uuid": "3003bb9e-0f91-5264-8ff8-e851f2ae7e00", 00:15:23.266 "is_configured": true, 00:15:23.266 "data_offset": 0, 
00:15:23.266 "data_size": 65536 00:15:23.266 }, 00:15:23.266 { 00:15:23.266 "name": "BaseBdev2", 00:15:23.266 "uuid": "8b17fe39-92e4-55f6-8ed5-146900bb6d41", 00:15:23.266 "is_configured": true, 00:15:23.266 "data_offset": 0, 00:15:23.266 "data_size": 65536 00:15:23.266 }, 00:15:23.266 { 00:15:23.266 "name": "BaseBdev3", 00:15:23.266 "uuid": "66449fa6-ffde-53c1-ade7-5b30b8ac69e2", 00:15:23.266 "is_configured": true, 00:15:23.266 "data_offset": 0, 00:15:23.266 "data_size": 65536 00:15:23.266 } 00:15:23.266 ] 00:15:23.266 }' 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=539 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.266 12:07:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.266 "name": "raid_bdev1", 00:15:23.266 "uuid": "cceef2bc-ba44-4105-b51a-d570fed8a7c4", 00:15:23.266 "strip_size_kb": 64, 00:15:23.266 "state": "online", 00:15:23.266 "raid_level": "raid5f", 00:15:23.266 "superblock": false, 00:15:23.266 "num_base_bdevs": 3, 00:15:23.266 "num_base_bdevs_discovered": 3, 00:15:23.266 "num_base_bdevs_operational": 3, 00:15:23.266 "process": { 00:15:23.266 "type": "rebuild", 00:15:23.266 "target": "spare", 00:15:23.266 "progress": { 00:15:23.266 "blocks": 22528, 00:15:23.266 "percent": 17 00:15:23.266 } 00:15:23.266 }, 00:15:23.266 "base_bdevs_list": [ 00:15:23.266 { 00:15:23.266 "name": "spare", 00:15:23.266 "uuid": "3003bb9e-0f91-5264-8ff8-e851f2ae7e00", 00:15:23.266 "is_configured": true, 00:15:23.266 "data_offset": 0, 00:15:23.266 "data_size": 65536 00:15:23.266 }, 00:15:23.266 { 00:15:23.266 "name": "BaseBdev2", 00:15:23.266 "uuid": "8b17fe39-92e4-55f6-8ed5-146900bb6d41", 00:15:23.266 "is_configured": true, 00:15:23.266 "data_offset": 0, 00:15:23.266 "data_size": 65536 00:15:23.266 }, 00:15:23.266 { 00:15:23.266 "name": "BaseBdev3", 00:15:23.266 "uuid": "66449fa6-ffde-53c1-ade7-5b30b8ac69e2", 00:15:23.266 "is_configured": true, 00:15:23.266 "data_offset": 0, 00:15:23.266 "data_size": 65536 00:15:23.266 } 
00:15:23.266 ] 00:15:23.266 }' 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.266 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.267 12:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:24.647 12:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.647 12:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.647 12:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.647 12:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.647 12:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.647 12:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.647 12:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.647 12:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.647 12:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.647 12:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.647 12:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.647 12:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.648 "name": "raid_bdev1", 00:15:24.648 "uuid": "cceef2bc-ba44-4105-b51a-d570fed8a7c4", 00:15:24.648 
"strip_size_kb": 64, 00:15:24.648 "state": "online", 00:15:24.648 "raid_level": "raid5f", 00:15:24.648 "superblock": false, 00:15:24.648 "num_base_bdevs": 3, 00:15:24.648 "num_base_bdevs_discovered": 3, 00:15:24.648 "num_base_bdevs_operational": 3, 00:15:24.648 "process": { 00:15:24.648 "type": "rebuild", 00:15:24.648 "target": "spare", 00:15:24.648 "progress": { 00:15:24.648 "blocks": 45056, 00:15:24.648 "percent": 34 00:15:24.648 } 00:15:24.648 }, 00:15:24.648 "base_bdevs_list": [ 00:15:24.648 { 00:15:24.648 "name": "spare", 00:15:24.648 "uuid": "3003bb9e-0f91-5264-8ff8-e851f2ae7e00", 00:15:24.648 "is_configured": true, 00:15:24.648 "data_offset": 0, 00:15:24.648 "data_size": 65536 00:15:24.648 }, 00:15:24.648 { 00:15:24.648 "name": "BaseBdev2", 00:15:24.648 "uuid": "8b17fe39-92e4-55f6-8ed5-146900bb6d41", 00:15:24.648 "is_configured": true, 00:15:24.648 "data_offset": 0, 00:15:24.648 "data_size": 65536 00:15:24.648 }, 00:15:24.648 { 00:15:24.648 "name": "BaseBdev3", 00:15:24.648 "uuid": "66449fa6-ffde-53c1-ade7-5b30b8ac69e2", 00:15:24.648 "is_configured": true, 00:15:24.648 "data_offset": 0, 00:15:24.648 "data_size": 65536 00:15:24.648 } 00:15:24.648 ] 00:15:24.648 }' 00:15:24.648 12:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.648 12:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.648 12:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.648 12:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.648 12:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.587 12:07:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.587 "name": "raid_bdev1", 00:15:25.587 "uuid": "cceef2bc-ba44-4105-b51a-d570fed8a7c4", 00:15:25.587 "strip_size_kb": 64, 00:15:25.587 "state": "online", 00:15:25.587 "raid_level": "raid5f", 00:15:25.587 "superblock": false, 00:15:25.587 "num_base_bdevs": 3, 00:15:25.587 "num_base_bdevs_discovered": 3, 00:15:25.587 "num_base_bdevs_operational": 3, 00:15:25.587 "process": { 00:15:25.587 "type": "rebuild", 00:15:25.587 "target": "spare", 00:15:25.587 "progress": { 00:15:25.587 "blocks": 69632, 00:15:25.587 "percent": 53 00:15:25.587 } 00:15:25.587 }, 00:15:25.587 "base_bdevs_list": [ 00:15:25.587 { 00:15:25.587 "name": "spare", 00:15:25.587 "uuid": "3003bb9e-0f91-5264-8ff8-e851f2ae7e00", 00:15:25.587 "is_configured": true, 00:15:25.587 "data_offset": 0, 00:15:25.587 "data_size": 65536 00:15:25.587 }, 00:15:25.587 { 00:15:25.587 "name": "BaseBdev2", 00:15:25.587 "uuid": "8b17fe39-92e4-55f6-8ed5-146900bb6d41", 00:15:25.587 
"is_configured": true, 00:15:25.587 "data_offset": 0, 00:15:25.587 "data_size": 65536 00:15:25.587 }, 00:15:25.587 { 00:15:25.587 "name": "BaseBdev3", 00:15:25.587 "uuid": "66449fa6-ffde-53c1-ade7-5b30b8ac69e2", 00:15:25.587 "is_configured": true, 00:15:25.587 "data_offset": 0, 00:15:25.587 "data_size": 65536 00:15:25.587 } 00:15:25.587 ] 00:15:25.587 }' 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.587 12:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.972 "name": "raid_bdev1", 00:15:26.972 "uuid": "cceef2bc-ba44-4105-b51a-d570fed8a7c4", 00:15:26.972 "strip_size_kb": 64, 00:15:26.972 "state": "online", 00:15:26.972 "raid_level": "raid5f", 00:15:26.972 "superblock": false, 00:15:26.972 "num_base_bdevs": 3, 00:15:26.972 "num_base_bdevs_discovered": 3, 00:15:26.972 "num_base_bdevs_operational": 3, 00:15:26.972 "process": { 00:15:26.972 "type": "rebuild", 00:15:26.972 "target": "spare", 00:15:26.972 "progress": { 00:15:26.972 "blocks": 92160, 00:15:26.972 "percent": 70 00:15:26.972 } 00:15:26.972 }, 00:15:26.972 "base_bdevs_list": [ 00:15:26.972 { 00:15:26.972 "name": "spare", 00:15:26.972 "uuid": "3003bb9e-0f91-5264-8ff8-e851f2ae7e00", 00:15:26.972 "is_configured": true, 00:15:26.972 "data_offset": 0, 00:15:26.972 "data_size": 65536 00:15:26.972 }, 00:15:26.972 { 00:15:26.972 "name": "BaseBdev2", 00:15:26.972 "uuid": "8b17fe39-92e4-55f6-8ed5-146900bb6d41", 00:15:26.972 "is_configured": true, 00:15:26.972 "data_offset": 0, 00:15:26.972 "data_size": 65536 00:15:26.972 }, 00:15:26.972 { 00:15:26.972 "name": "BaseBdev3", 00:15:26.972 "uuid": "66449fa6-ffde-53c1-ade7-5b30b8ac69e2", 00:15:26.972 "is_configured": true, 00:15:26.972 "data_offset": 0, 00:15:26.972 "data_size": 65536 00:15:26.972 } 00:15:26.972 ] 00:15:26.972 }' 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.972 12:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.972 12:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.972 12:07:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.919 12:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.919 12:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.919 12:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.919 12:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.919 12:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.919 12:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.919 12:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.919 12:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.919 12:07:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.919 12:07:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.919 12:07:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.919 12:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.919 "name": "raid_bdev1", 00:15:27.919 "uuid": "cceef2bc-ba44-4105-b51a-d570fed8a7c4", 00:15:27.919 "strip_size_kb": 64, 00:15:27.919 "state": "online", 00:15:27.919 "raid_level": "raid5f", 00:15:27.919 "superblock": false, 00:15:27.919 "num_base_bdevs": 3, 00:15:27.919 "num_base_bdevs_discovered": 3, 00:15:27.919 "num_base_bdevs_operational": 3, 00:15:27.919 "process": { 00:15:27.919 "type": "rebuild", 00:15:27.919 "target": "spare", 00:15:27.919 "progress": { 00:15:27.919 "blocks": 114688, 00:15:27.919 "percent": 87 00:15:27.919 } 00:15:27.919 }, 00:15:27.919 "base_bdevs_list": [ 00:15:27.919 { 
00:15:27.919 "name": "spare", 00:15:27.919 "uuid": "3003bb9e-0f91-5264-8ff8-e851f2ae7e00", 00:15:27.919 "is_configured": true, 00:15:27.919 "data_offset": 0, 00:15:27.919 "data_size": 65536 00:15:27.919 }, 00:15:27.919 { 00:15:27.919 "name": "BaseBdev2", 00:15:27.919 "uuid": "8b17fe39-92e4-55f6-8ed5-146900bb6d41", 00:15:27.919 "is_configured": true, 00:15:27.919 "data_offset": 0, 00:15:27.920 "data_size": 65536 00:15:27.920 }, 00:15:27.920 { 00:15:27.920 "name": "BaseBdev3", 00:15:27.920 "uuid": "66449fa6-ffde-53c1-ade7-5b30b8ac69e2", 00:15:27.920 "is_configured": true, 00:15:27.920 "data_offset": 0, 00:15:27.920 "data_size": 65536 00:15:27.920 } 00:15:27.920 ] 00:15:27.920 }' 00:15:27.920 12:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.920 12:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.920 12:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.920 12:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.920 12:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.489 [2024-11-19 12:07:31.770486] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:28.489 [2024-11-19 12:07:31.770578] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:28.489 [2024-11-19 12:07:31.770613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.058 12:07:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.058 "name": "raid_bdev1", 00:15:29.058 "uuid": "cceef2bc-ba44-4105-b51a-d570fed8a7c4", 00:15:29.058 "strip_size_kb": 64, 00:15:29.058 "state": "online", 00:15:29.058 "raid_level": "raid5f", 00:15:29.058 "superblock": false, 00:15:29.058 "num_base_bdevs": 3, 00:15:29.058 "num_base_bdevs_discovered": 3, 00:15:29.058 "num_base_bdevs_operational": 3, 00:15:29.058 "base_bdevs_list": [ 00:15:29.058 { 00:15:29.058 "name": "spare", 00:15:29.058 "uuid": "3003bb9e-0f91-5264-8ff8-e851f2ae7e00", 00:15:29.058 "is_configured": true, 00:15:29.058 "data_offset": 0, 00:15:29.058 "data_size": 65536 00:15:29.058 }, 00:15:29.058 { 00:15:29.058 "name": "BaseBdev2", 00:15:29.058 "uuid": "8b17fe39-92e4-55f6-8ed5-146900bb6d41", 00:15:29.058 "is_configured": true, 00:15:29.058 "data_offset": 0, 00:15:29.058 "data_size": 65536 00:15:29.058 }, 00:15:29.058 { 00:15:29.058 "name": "BaseBdev3", 00:15:29.058 "uuid": "66449fa6-ffde-53c1-ade7-5b30b8ac69e2", 00:15:29.058 "is_configured": true, 00:15:29.058 "data_offset": 0, 00:15:29.058 "data_size": 65536 00:15:29.058 } 
00:15:29.058 ] 00:15:29.058 }' 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.058 "name": "raid_bdev1", 00:15:29.058 "uuid": "cceef2bc-ba44-4105-b51a-d570fed8a7c4", 00:15:29.058 "strip_size_kb": 64, 00:15:29.058 "state": "online", 00:15:29.058 "raid_level": "raid5f", 00:15:29.058 "superblock": false, 
00:15:29.058 "num_base_bdevs": 3, 00:15:29.058 "num_base_bdevs_discovered": 3, 00:15:29.058 "num_base_bdevs_operational": 3, 00:15:29.058 "base_bdevs_list": [ 00:15:29.058 { 00:15:29.058 "name": "spare", 00:15:29.058 "uuid": "3003bb9e-0f91-5264-8ff8-e851f2ae7e00", 00:15:29.058 "is_configured": true, 00:15:29.058 "data_offset": 0, 00:15:29.058 "data_size": 65536 00:15:29.058 }, 00:15:29.058 { 00:15:29.058 "name": "BaseBdev2", 00:15:29.058 "uuid": "8b17fe39-92e4-55f6-8ed5-146900bb6d41", 00:15:29.058 "is_configured": true, 00:15:29.058 "data_offset": 0, 00:15:29.058 "data_size": 65536 00:15:29.058 }, 00:15:29.058 { 00:15:29.058 "name": "BaseBdev3", 00:15:29.058 "uuid": "66449fa6-ffde-53c1-ade7-5b30b8ac69e2", 00:15:29.058 "is_configured": true, 00:15:29.058 "data_offset": 0, 00:15:29.058 "data_size": 65536 00:15:29.058 } 00:15:29.058 ] 00:15:29.058 }' 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.058 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.318 
12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.318 "name": "raid_bdev1", 00:15:29.318 "uuid": "cceef2bc-ba44-4105-b51a-d570fed8a7c4", 00:15:29.318 "strip_size_kb": 64, 00:15:29.318 "state": "online", 00:15:29.318 "raid_level": "raid5f", 00:15:29.318 "superblock": false, 00:15:29.318 "num_base_bdevs": 3, 00:15:29.318 "num_base_bdevs_discovered": 3, 00:15:29.318 "num_base_bdevs_operational": 3, 00:15:29.318 "base_bdevs_list": [ 00:15:29.318 { 00:15:29.318 "name": "spare", 00:15:29.318 "uuid": "3003bb9e-0f91-5264-8ff8-e851f2ae7e00", 00:15:29.318 "is_configured": true, 00:15:29.318 "data_offset": 0, 00:15:29.318 "data_size": 65536 00:15:29.318 }, 00:15:29.318 { 00:15:29.318 "name": "BaseBdev2", 00:15:29.318 "uuid": "8b17fe39-92e4-55f6-8ed5-146900bb6d41", 00:15:29.318 "is_configured": true, 00:15:29.318 "data_offset": 0, 00:15:29.318 "data_size": 65536 00:15:29.318 }, 00:15:29.318 { 00:15:29.318 "name": "BaseBdev3", 00:15:29.318 "uuid": "66449fa6-ffde-53c1-ade7-5b30b8ac69e2", 
00:15:29.318 "is_configured": true, 00:15:29.318 "data_offset": 0, 00:15:29.318 "data_size": 65536 00:15:29.318 } 00:15:29.318 ] 00:15:29.318 }' 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.318 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.578 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:29.578 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.578 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.578 [2024-11-19 12:07:32.903421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.578 [2024-11-19 12:07:32.903455] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.578 [2024-11-19 12:07:32.903544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.578 [2024-11-19 12:07:32.903646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.578 [2024-11-19 12:07:32.903668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:29.578 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.578 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.578 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.578 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:29.578 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.578 12:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.839 12:07:32 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:29.839 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:29.839 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:29.839 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:29.839 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.839 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:29.839 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:29.839 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:29.839 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:29.839 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:29.839 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:29.839 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:29.839 12:07:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:29.839 /dev/nbd0 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.839 1+0 records in 00:15:29.839 1+0 records out 00:15:29.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214522 s, 19.1 MB/s 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:29.839 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:30.099 /dev/nbd1 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:30.099 12:07:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:30.099 1+0 records in 00:15:30.099 1+0 records out 00:15:30.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238179 s, 17.2 MB/s 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:30.099 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:30.359 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:30.359 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:30.359 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:30.359 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:30.359 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:30.359 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.359 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:30.619 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:30.619 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:30.619 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:30.619 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.619 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.619 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:30.619 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:30.619 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.619 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.619 12:07:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81512 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81512 ']' 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81512 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81512 00:15:30.879 killing process with pid 81512 00:15:30.879 Received shutdown signal, test time was about 60.000000 seconds 00:15:30.879 00:15:30.879 Latency(us) 00:15:30.879 [2024-11-19T12:07:34.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.879 [2024-11-19T12:07:34.256Z] =================================================================================================================== 00:15:30.879 [2024-11-19T12:07:34.256Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81512' 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81512 00:15:30.879 [2024-11-19 12:07:34.126171] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.879 12:07:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81512 00:15:31.139 [2024-11-19 12:07:34.513128] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.525 ************************************ 00:15:32.525 END TEST raid5f_rebuild_test 00:15:32.525 12:07:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:32.525 00:15:32.525 real 0m15.104s 00:15:32.525 user 0m18.578s 00:15:32.525 sys 0m1.946s 00:15:32.525 12:07:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.525 12:07:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.525 ************************************ 00:15:32.525 12:07:35 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:32.525 12:07:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:32.525 12:07:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.525 12:07:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.525 ************************************ 00:15:32.525 START TEST raid5f_rebuild_test_sb 00:15:32.525 ************************************ 00:15:32.525 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 
00:15:32.525 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:32.525 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:32.525 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:32.525 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:32.525 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:32.525 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:32.525 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.525 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:32.525 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.525 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81946 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81946 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81946 ']' 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.526 12:07:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.526 [2024-11-19 12:07:35.736770] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:15:32.526 [2024-11-19 12:07:35.736897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81946 ] 00:15:32.526 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:32.526 Zero copy mechanism will not be used. 00:15:32.793 [2024-11-19 12:07:35.909315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.793 [2024-11-19 12:07:36.025637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.052 [2024-11-19 12:07:36.223104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.052 [2024-11-19 12:07:36.223169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.312 12:07:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.312 BaseBdev1_malloc 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.312 [2024-11-19 12:07:36.596161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:33.312 [2024-11-19 12:07:36.596243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.312 [2024-11-19 12:07:36.596267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:33.312 [2024-11-19 12:07:36.596278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.312 [2024-11-19 12:07:36.598309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.312 [2024-11-19 12:07:36.598348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:33.312 BaseBdev1 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.312 BaseBdev2_malloc 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.312 [2024-11-19 12:07:36.649541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:33.312 [2024-11-19 12:07:36.649602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.312 [2024-11-19 12:07:36.649619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:33.312 [2024-11-19 12:07:36.649632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.312 [2024-11-19 12:07:36.651670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.312 [2024-11-19 12:07:36.651710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:33.312 BaseBdev2 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.312 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.572 BaseBdev3_malloc 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.572 [2024-11-19 12:07:36.738472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:33.572 [2024-11-19 12:07:36.738526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.572 [2024-11-19 12:07:36.738548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:33.572 [2024-11-19 12:07:36.738559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.572 [2024-11-19 12:07:36.740599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.572 [2024-11-19 12:07:36.740639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:33.572 BaseBdev3 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.572 spare_malloc 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.572 spare_delay 00:15:33.572 
12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.572 [2024-11-19 12:07:36.801686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:33.572 [2024-11-19 12:07:36.801735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.572 [2024-11-19 12:07:36.801767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:33.572 [2024-11-19 12:07:36.801777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.572 [2024-11-19 12:07:36.803763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.572 [2024-11-19 12:07:36.803802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:33.572 spare 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.572 [2024-11-19 12:07:36.813733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.572 [2024-11-19 12:07:36.815440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.572 [2024-11-19 12:07:36.815505] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:33.572 [2024-11-19 12:07:36.815666] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:33.572 [2024-11-19 12:07:36.815687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:33.572 [2024-11-19 12:07:36.815926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:33.572 [2024-11-19 12:07:36.821489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:33.572 [2024-11-19 12:07:36.821512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:33.572 [2024-11-19 12:07:36.821697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.572 "name": "raid_bdev1", 00:15:33.572 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:33.572 "strip_size_kb": 64, 00:15:33.572 "state": "online", 00:15:33.572 "raid_level": "raid5f", 00:15:33.572 "superblock": true, 00:15:33.572 "num_base_bdevs": 3, 00:15:33.572 "num_base_bdevs_discovered": 3, 00:15:33.572 "num_base_bdevs_operational": 3, 00:15:33.572 "base_bdevs_list": [ 00:15:33.572 { 00:15:33.572 "name": "BaseBdev1", 00:15:33.572 "uuid": "7960e653-7909-526d-bee4-3908245ab406", 00:15:33.572 "is_configured": true, 00:15:33.572 "data_offset": 2048, 00:15:33.572 "data_size": 63488 00:15:33.572 }, 00:15:33.572 { 00:15:33.572 "name": "BaseBdev2", 00:15:33.572 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:33.572 "is_configured": true, 00:15:33.572 "data_offset": 2048, 00:15:33.572 "data_size": 63488 00:15:33.572 }, 00:15:33.572 { 00:15:33.572 "name": "BaseBdev3", 00:15:33.572 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:33.572 "is_configured": true, 00:15:33.572 "data_offset": 2048, 00:15:33.572 "data_size": 63488 00:15:33.572 } 00:15:33.572 ] 00:15:33.572 }' 00:15:33.572 12:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.572 12:07:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.141 [2024-11-19 12:07:37.271417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:34.141 12:07:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:34.141 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:34.142 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:34.401 [2024-11-19 12:07:37.530842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:34.401 /dev/nbd0 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.401 1+0 records in 00:15:34.401 1+0 records out 00:15:34.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420011 s, 9.8 MB/s 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:34.401 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:34.661 496+0 records in 00:15:34.661 496+0 records out 00:15:34.661 65011712 bytes (65 MB, 62 MiB) copied, 0.372184 s, 175 MB/s 00:15:34.661 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:34.661 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.661 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:34.661 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.661 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:34.661 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.661 12:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.921 [2024-11-19 12:07:38.179508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:34.921 [2024-11-19 12:07:38.195432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.921 "name": "raid_bdev1", 00:15:34.921 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:34.921 "strip_size_kb": 64, 00:15:34.921 "state": "online", 00:15:34.921 "raid_level": "raid5f", 00:15:34.921 "superblock": true, 00:15:34.921 "num_base_bdevs": 3, 00:15:34.921 "num_base_bdevs_discovered": 2, 00:15:34.921 "num_base_bdevs_operational": 2, 00:15:34.921 "base_bdevs_list": [ 00:15:34.921 { 00:15:34.921 "name": null, 00:15:34.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.921 "is_configured": false, 00:15:34.921 "data_offset": 0, 00:15:34.921 "data_size": 63488 00:15:34.921 }, 00:15:34.921 { 00:15:34.921 "name": "BaseBdev2", 00:15:34.921 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:34.921 "is_configured": true, 00:15:34.921 "data_offset": 2048, 00:15:34.921 "data_size": 63488 00:15:34.921 }, 00:15:34.921 { 00:15:34.921 "name": "BaseBdev3", 00:15:34.921 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:34.921 "is_configured": true, 00:15:34.921 "data_offset": 2048, 00:15:34.921 "data_size": 63488 00:15:34.921 } 00:15:34.921 ] 00:15:34.921 }' 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.921 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.490 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.490 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.490 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.490 [2024-11-19 12:07:38.646671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.490 [2024-11-19 12:07:38.662713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:35.490 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.490 12:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:35.490 [2024-11-19 12:07:38.670101] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.427 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.427 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.427 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.427 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.427 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.427 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.427 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.427 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.427 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.427 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.427 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.427 "name": "raid_bdev1", 00:15:36.427 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:36.427 "strip_size_kb": 64, 00:15:36.427 "state": "online", 00:15:36.427 "raid_level": "raid5f", 00:15:36.427 "superblock": true, 00:15:36.427 "num_base_bdevs": 3, 00:15:36.427 "num_base_bdevs_discovered": 3, 00:15:36.427 "num_base_bdevs_operational": 3, 00:15:36.427 "process": { 00:15:36.427 "type": "rebuild", 00:15:36.427 "target": "spare", 00:15:36.427 "progress": { 
00:15:36.427 "blocks": 20480, 00:15:36.427 "percent": 16 00:15:36.427 } 00:15:36.427 }, 00:15:36.427 "base_bdevs_list": [ 00:15:36.427 { 00:15:36.427 "name": "spare", 00:15:36.427 "uuid": "abe25379-fefb-562f-af57-6b658689cab8", 00:15:36.427 "is_configured": true, 00:15:36.427 "data_offset": 2048, 00:15:36.427 "data_size": 63488 00:15:36.427 }, 00:15:36.427 { 00:15:36.427 "name": "BaseBdev2", 00:15:36.427 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:36.427 "is_configured": true, 00:15:36.427 "data_offset": 2048, 00:15:36.427 "data_size": 63488 00:15:36.427 }, 00:15:36.427 { 00:15:36.427 "name": "BaseBdev3", 00:15:36.427 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:36.427 "is_configured": true, 00:15:36.427 "data_offset": 2048, 00:15:36.427 "data_size": 63488 00:15:36.427 } 00:15:36.427 ] 00:15:36.427 }' 00:15:36.427 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.427 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.427 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.687 [2024-11-19 12:07:39.828856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.687 [2024-11-19 12:07:39.878320] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:36.687 [2024-11-19 12:07:39.878374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:36.687 [2024-11-19 12:07:39.878406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.687 [2024-11-19 12:07:39.878414] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.687 12:07:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.687 "name": "raid_bdev1", 00:15:36.687 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:36.687 "strip_size_kb": 64, 00:15:36.687 "state": "online", 00:15:36.687 "raid_level": "raid5f", 00:15:36.687 "superblock": true, 00:15:36.687 "num_base_bdevs": 3, 00:15:36.687 "num_base_bdevs_discovered": 2, 00:15:36.687 "num_base_bdevs_operational": 2, 00:15:36.687 "base_bdevs_list": [ 00:15:36.687 { 00:15:36.687 "name": null, 00:15:36.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.687 "is_configured": false, 00:15:36.687 "data_offset": 0, 00:15:36.687 "data_size": 63488 00:15:36.687 }, 00:15:36.687 { 00:15:36.687 "name": "BaseBdev2", 00:15:36.687 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:36.687 "is_configured": true, 00:15:36.687 "data_offset": 2048, 00:15:36.687 "data_size": 63488 00:15:36.687 }, 00:15:36.687 { 00:15:36.687 "name": "BaseBdev3", 00:15:36.687 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:36.687 "is_configured": true, 00:15:36.687 "data_offset": 2048, 00:15:36.687 "data_size": 63488 00:15:36.687 } 00:15:36.687 ] 00:15:36.687 }' 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.687 12:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.257 "name": "raid_bdev1", 00:15:37.257 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:37.257 "strip_size_kb": 64, 00:15:37.257 "state": "online", 00:15:37.257 "raid_level": "raid5f", 00:15:37.257 "superblock": true, 00:15:37.257 "num_base_bdevs": 3, 00:15:37.257 "num_base_bdevs_discovered": 2, 00:15:37.257 "num_base_bdevs_operational": 2, 00:15:37.257 "base_bdevs_list": [ 00:15:37.257 { 00:15:37.257 "name": null, 00:15:37.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.257 "is_configured": false, 00:15:37.257 "data_offset": 0, 00:15:37.257 "data_size": 63488 00:15:37.257 }, 00:15:37.257 { 00:15:37.257 "name": "BaseBdev2", 00:15:37.257 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:37.257 "is_configured": true, 00:15:37.257 "data_offset": 2048, 00:15:37.257 "data_size": 63488 00:15:37.257 }, 00:15:37.257 { 00:15:37.257 "name": "BaseBdev3", 00:15:37.257 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:37.257 "is_configured": true, 00:15:37.257 "data_offset": 2048, 00:15:37.257 "data_size": 63488 00:15:37.257 } 00:15:37.257 ] 00:15:37.257 }' 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.257 [2024-11-19 12:07:40.524074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.257 [2024-11-19 12:07:40.539406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.257 12:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:37.258 [2024-11-19 12:07:40.546418] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:38.196 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.196 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.196 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.196 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.196 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.196 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.196 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:38.196 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.196 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.196 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.457 "name": "raid_bdev1", 00:15:38.457 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:38.457 "strip_size_kb": 64, 00:15:38.457 "state": "online", 00:15:38.457 "raid_level": "raid5f", 00:15:38.457 "superblock": true, 00:15:38.457 "num_base_bdevs": 3, 00:15:38.457 "num_base_bdevs_discovered": 3, 00:15:38.457 "num_base_bdevs_operational": 3, 00:15:38.457 "process": { 00:15:38.457 "type": "rebuild", 00:15:38.457 "target": "spare", 00:15:38.457 "progress": { 00:15:38.457 "blocks": 20480, 00:15:38.457 "percent": 16 00:15:38.457 } 00:15:38.457 }, 00:15:38.457 "base_bdevs_list": [ 00:15:38.457 { 00:15:38.457 "name": "spare", 00:15:38.457 "uuid": "abe25379-fefb-562f-af57-6b658689cab8", 00:15:38.457 "is_configured": true, 00:15:38.457 "data_offset": 2048, 00:15:38.457 "data_size": 63488 00:15:38.457 }, 00:15:38.457 { 00:15:38.457 "name": "BaseBdev2", 00:15:38.457 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:38.457 "is_configured": true, 00:15:38.457 "data_offset": 2048, 00:15:38.457 "data_size": 63488 00:15:38.457 }, 00:15:38.457 { 00:15:38.457 "name": "BaseBdev3", 00:15:38.457 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:38.457 "is_configured": true, 00:15:38.457 "data_offset": 2048, 00:15:38.457 "data_size": 63488 00:15:38.457 } 00:15:38.457 ] 00:15:38.457 }' 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.457 
12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:38.457 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=554 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.457 "name": "raid_bdev1", 00:15:38.457 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:38.457 "strip_size_kb": 64, 00:15:38.457 "state": "online", 00:15:38.457 "raid_level": "raid5f", 00:15:38.457 "superblock": true, 00:15:38.457 "num_base_bdevs": 3, 00:15:38.457 "num_base_bdevs_discovered": 3, 00:15:38.457 "num_base_bdevs_operational": 3, 00:15:38.457 "process": { 00:15:38.457 "type": "rebuild", 00:15:38.457 "target": "spare", 00:15:38.457 "progress": { 00:15:38.457 "blocks": 22528, 00:15:38.457 "percent": 17 00:15:38.457 } 00:15:38.457 }, 00:15:38.457 "base_bdevs_list": [ 00:15:38.457 { 00:15:38.457 "name": "spare", 00:15:38.457 "uuid": "abe25379-fefb-562f-af57-6b658689cab8", 00:15:38.457 "is_configured": true, 00:15:38.457 "data_offset": 2048, 00:15:38.457 "data_size": 63488 00:15:38.457 }, 00:15:38.457 { 00:15:38.457 "name": "BaseBdev2", 00:15:38.457 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:38.457 "is_configured": true, 00:15:38.457 "data_offset": 2048, 00:15:38.457 "data_size": 63488 00:15:38.457 }, 00:15:38.457 { 00:15:38.457 "name": "BaseBdev3", 00:15:38.457 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:38.457 "is_configured": true, 00:15:38.457 "data_offset": 2048, 00:15:38.457 "data_size": 63488 00:15:38.457 } 00:15:38.457 ] 00:15:38.457 }' 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.457 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.717 12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.717 
12:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.671 "name": "raid_bdev1", 00:15:39.671 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:39.671 "strip_size_kb": 64, 00:15:39.671 "state": "online", 00:15:39.671 "raid_level": "raid5f", 00:15:39.671 "superblock": true, 00:15:39.671 "num_base_bdevs": 3, 00:15:39.671 "num_base_bdevs_discovered": 3, 00:15:39.671 "num_base_bdevs_operational": 3, 00:15:39.671 "process": { 00:15:39.671 "type": "rebuild", 00:15:39.671 "target": "spare", 00:15:39.671 "progress": { 00:15:39.671 "blocks": 45056, 00:15:39.671 "percent": 35 00:15:39.671 } 00:15:39.671 }, 00:15:39.671 
"base_bdevs_list": [ 00:15:39.671 { 00:15:39.671 "name": "spare", 00:15:39.671 "uuid": "abe25379-fefb-562f-af57-6b658689cab8", 00:15:39.671 "is_configured": true, 00:15:39.671 "data_offset": 2048, 00:15:39.671 "data_size": 63488 00:15:39.671 }, 00:15:39.671 { 00:15:39.671 "name": "BaseBdev2", 00:15:39.671 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:39.671 "is_configured": true, 00:15:39.671 "data_offset": 2048, 00:15:39.671 "data_size": 63488 00:15:39.671 }, 00:15:39.671 { 00:15:39.671 "name": "BaseBdev3", 00:15:39.671 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:39.671 "is_configured": true, 00:15:39.671 "data_offset": 2048, 00:15:39.671 "data_size": 63488 00:15:39.671 } 00:15:39.671 ] 00:15:39.671 }' 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.671 12:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:41.049 12:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.049 12:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.049 12:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.049 12:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.049 12:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.049 12:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.049 12:07:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.049 12:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.049 12:07:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.049 12:07:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.049 12:07:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.049 12:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.049 "name": "raid_bdev1", 00:15:41.049 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:41.049 "strip_size_kb": 64, 00:15:41.049 "state": "online", 00:15:41.049 "raid_level": "raid5f", 00:15:41.049 "superblock": true, 00:15:41.049 "num_base_bdevs": 3, 00:15:41.049 "num_base_bdevs_discovered": 3, 00:15:41.049 "num_base_bdevs_operational": 3, 00:15:41.049 "process": { 00:15:41.049 "type": "rebuild", 00:15:41.049 "target": "spare", 00:15:41.049 "progress": { 00:15:41.049 "blocks": 69632, 00:15:41.049 "percent": 54 00:15:41.049 } 00:15:41.049 }, 00:15:41.049 "base_bdevs_list": [ 00:15:41.049 { 00:15:41.049 "name": "spare", 00:15:41.049 "uuid": "abe25379-fefb-562f-af57-6b658689cab8", 00:15:41.049 "is_configured": true, 00:15:41.049 "data_offset": 2048, 00:15:41.049 "data_size": 63488 00:15:41.049 }, 00:15:41.049 { 00:15:41.049 "name": "BaseBdev2", 00:15:41.049 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:41.049 "is_configured": true, 00:15:41.049 "data_offset": 2048, 00:15:41.049 "data_size": 63488 00:15:41.049 }, 00:15:41.049 { 00:15:41.049 "name": "BaseBdev3", 00:15:41.049 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:41.049 "is_configured": true, 00:15:41.049 "data_offset": 2048, 00:15:41.049 "data_size": 63488 00:15:41.049 } 00:15:41.049 ] 00:15:41.049 }' 00:15:41.049 12:07:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.050 12:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.050 12:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.050 12:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.050 12:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.990 "name": "raid_bdev1", 00:15:41.990 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:41.990 
"strip_size_kb": 64, 00:15:41.990 "state": "online", 00:15:41.990 "raid_level": "raid5f", 00:15:41.990 "superblock": true, 00:15:41.990 "num_base_bdevs": 3, 00:15:41.990 "num_base_bdevs_discovered": 3, 00:15:41.990 "num_base_bdevs_operational": 3, 00:15:41.990 "process": { 00:15:41.990 "type": "rebuild", 00:15:41.990 "target": "spare", 00:15:41.990 "progress": { 00:15:41.990 "blocks": 92160, 00:15:41.990 "percent": 72 00:15:41.990 } 00:15:41.990 }, 00:15:41.990 "base_bdevs_list": [ 00:15:41.990 { 00:15:41.990 "name": "spare", 00:15:41.990 "uuid": "abe25379-fefb-562f-af57-6b658689cab8", 00:15:41.990 "is_configured": true, 00:15:41.990 "data_offset": 2048, 00:15:41.990 "data_size": 63488 00:15:41.990 }, 00:15:41.990 { 00:15:41.990 "name": "BaseBdev2", 00:15:41.990 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:41.990 "is_configured": true, 00:15:41.990 "data_offset": 2048, 00:15:41.990 "data_size": 63488 00:15:41.990 }, 00:15:41.990 { 00:15:41.990 "name": "BaseBdev3", 00:15:41.990 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:41.990 "is_configured": true, 00:15:41.990 "data_offset": 2048, 00:15:41.990 "data_size": 63488 00:15:41.990 } 00:15:41.990 ] 00:15:41.990 }' 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.990 12:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:42.929 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:42.929 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:15:42.929 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.929 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.929 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.929 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.929 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.929 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.929 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.188 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.188 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.188 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.188 "name": "raid_bdev1", 00:15:43.188 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:43.188 "strip_size_kb": 64, 00:15:43.189 "state": "online", 00:15:43.189 "raid_level": "raid5f", 00:15:43.189 "superblock": true, 00:15:43.189 "num_base_bdevs": 3, 00:15:43.189 "num_base_bdevs_discovered": 3, 00:15:43.189 "num_base_bdevs_operational": 3, 00:15:43.189 "process": { 00:15:43.189 "type": "rebuild", 00:15:43.189 "target": "spare", 00:15:43.189 "progress": { 00:15:43.189 "blocks": 116736, 00:15:43.189 "percent": 91 00:15:43.189 } 00:15:43.189 }, 00:15:43.189 "base_bdevs_list": [ 00:15:43.189 { 00:15:43.189 "name": "spare", 00:15:43.189 "uuid": "abe25379-fefb-562f-af57-6b658689cab8", 00:15:43.189 "is_configured": true, 00:15:43.189 "data_offset": 2048, 00:15:43.189 "data_size": 63488 00:15:43.189 }, 00:15:43.189 { 00:15:43.189 "name": "BaseBdev2", 00:15:43.189 "uuid": 
"a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:43.189 "is_configured": true, 00:15:43.189 "data_offset": 2048, 00:15:43.189 "data_size": 63488 00:15:43.189 }, 00:15:43.189 { 00:15:43.189 "name": "BaseBdev3", 00:15:43.189 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:43.189 "is_configured": true, 00:15:43.189 "data_offset": 2048, 00:15:43.189 "data_size": 63488 00:15:43.189 } 00:15:43.189 ] 00:15:43.189 }' 00:15:43.189 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.189 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.189 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.189 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.189 12:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:43.448 [2024-11-19 12:07:46.783588] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:43.449 [2024-11-19 12:07:46.783686] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:43.449 [2024-11-19 12:07:46.783791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.387 "name": "raid_bdev1", 00:15:44.387 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:44.387 "strip_size_kb": 64, 00:15:44.387 "state": "online", 00:15:44.387 "raid_level": "raid5f", 00:15:44.387 "superblock": true, 00:15:44.387 "num_base_bdevs": 3, 00:15:44.387 "num_base_bdevs_discovered": 3, 00:15:44.387 "num_base_bdevs_operational": 3, 00:15:44.387 "base_bdevs_list": [ 00:15:44.387 { 00:15:44.387 "name": "spare", 00:15:44.387 "uuid": "abe25379-fefb-562f-af57-6b658689cab8", 00:15:44.387 "is_configured": true, 00:15:44.387 "data_offset": 2048, 00:15:44.387 "data_size": 63488 00:15:44.387 }, 00:15:44.387 { 00:15:44.387 "name": "BaseBdev2", 00:15:44.387 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:44.387 "is_configured": true, 00:15:44.387 "data_offset": 2048, 00:15:44.387 "data_size": 63488 00:15:44.387 }, 00:15:44.387 { 00:15:44.387 "name": "BaseBdev3", 00:15:44.387 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:44.387 "is_configured": true, 00:15:44.387 "data_offset": 2048, 00:15:44.387 "data_size": 63488 00:15:44.387 } 00:15:44.387 ] 00:15:44.387 }' 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.387 "name": "raid_bdev1", 00:15:44.387 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:44.387 "strip_size_kb": 64, 00:15:44.387 "state": "online", 00:15:44.387 "raid_level": "raid5f", 00:15:44.387 "superblock": true, 00:15:44.387 "num_base_bdevs": 3, 00:15:44.387 "num_base_bdevs_discovered": 3, 00:15:44.387 "num_base_bdevs_operational": 3, 00:15:44.387 "base_bdevs_list": [ 
00:15:44.387 { 00:15:44.387 "name": "spare", 00:15:44.387 "uuid": "abe25379-fefb-562f-af57-6b658689cab8", 00:15:44.387 "is_configured": true, 00:15:44.387 "data_offset": 2048, 00:15:44.387 "data_size": 63488 00:15:44.387 }, 00:15:44.387 { 00:15:44.387 "name": "BaseBdev2", 00:15:44.387 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:44.387 "is_configured": true, 00:15:44.387 "data_offset": 2048, 00:15:44.387 "data_size": 63488 00:15:44.387 }, 00:15:44.387 { 00:15:44.387 "name": "BaseBdev3", 00:15:44.387 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:44.387 "is_configured": true, 00:15:44.387 "data_offset": 2048, 00:15:44.387 "data_size": 63488 00:15:44.387 } 00:15:44.387 ] 00:15:44.387 }' 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.387 12:07:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.387 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.645 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.645 "name": "raid_bdev1", 00:15:44.645 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:44.645 "strip_size_kb": 64, 00:15:44.645 "state": "online", 00:15:44.645 "raid_level": "raid5f", 00:15:44.645 "superblock": true, 00:15:44.645 "num_base_bdevs": 3, 00:15:44.645 "num_base_bdevs_discovered": 3, 00:15:44.645 "num_base_bdevs_operational": 3, 00:15:44.645 "base_bdevs_list": [ 00:15:44.645 { 00:15:44.645 "name": "spare", 00:15:44.645 "uuid": "abe25379-fefb-562f-af57-6b658689cab8", 00:15:44.645 "is_configured": true, 00:15:44.645 "data_offset": 2048, 00:15:44.645 "data_size": 63488 00:15:44.645 }, 00:15:44.645 { 00:15:44.645 "name": "BaseBdev2", 00:15:44.645 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:44.645 "is_configured": true, 00:15:44.645 "data_offset": 2048, 00:15:44.645 "data_size": 63488 00:15:44.645 }, 00:15:44.645 { 00:15:44.645 "name": "BaseBdev3", 00:15:44.645 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:44.645 "is_configured": true, 00:15:44.645 "data_offset": 2048, 00:15:44.645 
"data_size": 63488 00:15:44.645 } 00:15:44.645 ] 00:15:44.645 }' 00:15:44.645 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.645 12:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.904 [2024-11-19 12:07:48.188667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.904 [2024-11-19 12:07:48.188702] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.904 [2024-11-19 12:07:48.188791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.904 [2024-11-19 12:07:48.188869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.904 [2024-11-19 12:07:48.188891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:44.904 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:45.164 /dev/nbd0 00:15:45.164 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:45.164 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:45.164 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:45.164 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:45.164 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:45.164 12:07:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:45.164 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:45.164 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:45.164 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:45.164 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:45.164 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.164 1+0 records in 00:15:45.164 1+0 records out 00:15:45.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378438 s, 10.8 MB/s 00:15:45.164 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.164 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:45.164 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.165 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:45.165 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:45.165 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:45.165 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:45.165 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:45.425 /dev/nbd1 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:45.425 12:07:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.425 1+0 records in 00:15:45.425 1+0 records out 00:15:45.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436434 s, 9.4 MB/s 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:45.425 12:07:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:45.425 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:45.684 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:45.684 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:45.684 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:45.684 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:45.684 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:45.684 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.684 12:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:45.944 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:45.944 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:45.944 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:45.944 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.944 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.944 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:45.944 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:45.944 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.944 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.944 
12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:45.944 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.204 [2024-11-19 12:07:49.343807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:46.204 
[2024-11-19 12:07:49.343864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.204 [2024-11-19 12:07:49.343883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:46.204 [2024-11-19 12:07:49.343894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.204 [2024-11-19 12:07:49.346328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.204 [2024-11-19 12:07:49.346365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:46.204 [2024-11-19 12:07:49.346454] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:46.204 [2024-11-19 12:07:49.346523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.204 [2024-11-19 12:07:49.346650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.204 [2024-11-19 12:07:49.346763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:46.204 spare 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.204 [2024-11-19 12:07:49.446664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:46.204 [2024-11-19 12:07:49.446695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:46.204 [2024-11-19 12:07:49.446966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:46.204 [2024-11-19 12:07:49.452127] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:46.204 [2024-11-19 12:07:49.452151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:46.204 [2024-11-19 12:07:49.452336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.204 "name": "raid_bdev1", 00:15:46.204 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:46.204 "strip_size_kb": 64, 00:15:46.204 "state": "online", 00:15:46.204 "raid_level": "raid5f", 00:15:46.204 "superblock": true, 00:15:46.204 "num_base_bdevs": 3, 00:15:46.204 "num_base_bdevs_discovered": 3, 00:15:46.204 "num_base_bdevs_operational": 3, 00:15:46.204 "base_bdevs_list": [ 00:15:46.204 { 00:15:46.204 "name": "spare", 00:15:46.204 "uuid": "abe25379-fefb-562f-af57-6b658689cab8", 00:15:46.204 "is_configured": true, 00:15:46.204 "data_offset": 2048, 00:15:46.204 "data_size": 63488 00:15:46.204 }, 00:15:46.204 { 00:15:46.204 "name": "BaseBdev2", 00:15:46.204 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:46.204 "is_configured": true, 00:15:46.204 "data_offset": 2048, 00:15:46.204 "data_size": 63488 00:15:46.204 }, 00:15:46.204 { 00:15:46.204 "name": "BaseBdev3", 00:15:46.204 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:46.204 "is_configured": true, 00:15:46.204 "data_offset": 2048, 00:15:46.204 "data_size": 63488 00:15:46.204 } 00:15:46.204 ] 00:15:46.204 }' 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.204 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.774 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.774 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.774 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.774 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:15:46.774 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.774 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.774 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.774 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.774 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.774 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.774 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.774 "name": "raid_bdev1", 00:15:46.774 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:46.774 "strip_size_kb": 64, 00:15:46.774 "state": "online", 00:15:46.774 "raid_level": "raid5f", 00:15:46.774 "superblock": true, 00:15:46.774 "num_base_bdevs": 3, 00:15:46.774 "num_base_bdevs_discovered": 3, 00:15:46.774 "num_base_bdevs_operational": 3, 00:15:46.774 "base_bdevs_list": [ 00:15:46.774 { 00:15:46.774 "name": "spare", 00:15:46.774 "uuid": "abe25379-fefb-562f-af57-6b658689cab8", 00:15:46.774 "is_configured": true, 00:15:46.774 "data_offset": 2048, 00:15:46.774 "data_size": 63488 00:15:46.774 }, 00:15:46.774 { 00:15:46.774 "name": "BaseBdev2", 00:15:46.774 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:46.774 "is_configured": true, 00:15:46.774 "data_offset": 2048, 00:15:46.774 "data_size": 63488 00:15:46.774 }, 00:15:46.774 { 00:15:46.774 "name": "BaseBdev3", 00:15:46.774 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:46.774 "is_configured": true, 00:15:46.774 "data_offset": 2048, 00:15:46.774 "data_size": 63488 00:15:46.774 } 00:15:46.774 ] 00:15:46.774 }' 00:15:46.774 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:15:46.774 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.774 12:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.774 [2024-11-19 12:07:50.077452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.774 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.774 "name": "raid_bdev1", 00:15:46.774 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:46.774 "strip_size_kb": 64, 00:15:46.774 "state": "online", 00:15:46.774 "raid_level": "raid5f", 00:15:46.774 "superblock": true, 00:15:46.774 "num_base_bdevs": 3, 00:15:46.774 "num_base_bdevs_discovered": 2, 00:15:46.774 "num_base_bdevs_operational": 2, 00:15:46.774 "base_bdevs_list": [ 00:15:46.774 { 00:15:46.774 "name": null, 00:15:46.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.774 "is_configured": false, 00:15:46.774 "data_offset": 0, 00:15:46.774 "data_size": 63488 00:15:46.775 }, 00:15:46.775 { 00:15:46.775 "name": "BaseBdev2", 
00:15:46.775 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:46.775 "is_configured": true, 00:15:46.775 "data_offset": 2048, 00:15:46.775 "data_size": 63488 00:15:46.775 }, 00:15:46.775 { 00:15:46.775 "name": "BaseBdev3", 00:15:46.775 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:46.775 "is_configured": true, 00:15:46.775 "data_offset": 2048, 00:15:46.775 "data_size": 63488 00:15:46.775 } 00:15:46.775 ] 00:15:46.775 }' 00:15:46.775 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.775 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.344 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:47.344 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.344 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.344 [2024-11-19 12:07:50.512781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:47.345 [2024-11-19 12:07:50.512976] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:47.345 [2024-11-19 12:07:50.513014] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:47.345 [2024-11-19 12:07:50.513047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:47.345 [2024-11-19 12:07:50.528422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:47.345 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.345 12:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:47.345 [2024-11-19 12:07:50.535384] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:48.283 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.283 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.283 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.283 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.283 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.283 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.283 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.283 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.283 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.283 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.283 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.283 "name": "raid_bdev1", 00:15:48.283 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:48.283 "strip_size_kb": 64, 00:15:48.283 "state": "online", 00:15:48.283 
"raid_level": "raid5f", 00:15:48.283 "superblock": true, 00:15:48.283 "num_base_bdevs": 3, 00:15:48.283 "num_base_bdevs_discovered": 3, 00:15:48.283 "num_base_bdevs_operational": 3, 00:15:48.283 "process": { 00:15:48.283 "type": "rebuild", 00:15:48.283 "target": "spare", 00:15:48.283 "progress": { 00:15:48.283 "blocks": 20480, 00:15:48.283 "percent": 16 00:15:48.283 } 00:15:48.283 }, 00:15:48.283 "base_bdevs_list": [ 00:15:48.283 { 00:15:48.283 "name": "spare", 00:15:48.283 "uuid": "abe25379-fefb-562f-af57-6b658689cab8", 00:15:48.283 "is_configured": true, 00:15:48.283 "data_offset": 2048, 00:15:48.283 "data_size": 63488 00:15:48.283 }, 00:15:48.283 { 00:15:48.283 "name": "BaseBdev2", 00:15:48.283 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:48.283 "is_configured": true, 00:15:48.283 "data_offset": 2048, 00:15:48.283 "data_size": 63488 00:15:48.283 }, 00:15:48.283 { 00:15:48.283 "name": "BaseBdev3", 00:15:48.283 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:48.283 "is_configured": true, 00:15:48.283 "data_offset": 2048, 00:15:48.283 "data_size": 63488 00:15:48.283 } 00:15:48.283 ] 00:15:48.283 }' 00:15:48.283 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.283 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.283 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.543 [2024-11-19 12:07:51.686453] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.543 [2024-11-19 12:07:51.743027] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:48.543 [2024-11-19 12:07:51.743094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.543 [2024-11-19 12:07:51.743109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.543 [2024-11-19 12:07:51.743118] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.543 12:07:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.543 "name": "raid_bdev1", 00:15:48.543 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:48.543 "strip_size_kb": 64, 00:15:48.543 "state": "online", 00:15:48.543 "raid_level": "raid5f", 00:15:48.543 "superblock": true, 00:15:48.543 "num_base_bdevs": 3, 00:15:48.543 "num_base_bdevs_discovered": 2, 00:15:48.543 "num_base_bdevs_operational": 2, 00:15:48.543 "base_bdevs_list": [ 00:15:48.543 { 00:15:48.543 "name": null, 00:15:48.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.543 "is_configured": false, 00:15:48.543 "data_offset": 0, 00:15:48.543 "data_size": 63488 00:15:48.543 }, 00:15:48.543 { 00:15:48.543 "name": "BaseBdev2", 00:15:48.543 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:48.543 "is_configured": true, 00:15:48.543 "data_offset": 2048, 00:15:48.543 "data_size": 63488 00:15:48.543 }, 00:15:48.543 { 00:15:48.543 "name": "BaseBdev3", 00:15:48.543 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:48.543 "is_configured": true, 00:15:48.543 "data_offset": 2048, 00:15:48.543 "data_size": 63488 00:15:48.543 } 00:15:48.543 ] 00:15:48.543 }' 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.543 12:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.112 12:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:49.112 12:07:52 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.112 12:07:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.112 [2024-11-19 12:07:52.212806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:49.112 [2024-11-19 12:07:52.212868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.112 [2024-11-19 12:07:52.212889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:49.112 [2024-11-19 12:07:52.212902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.112 [2024-11-19 12:07:52.213394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.112 [2024-11-19 12:07:52.213417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:49.112 [2024-11-19 12:07:52.213512] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:49.112 [2024-11-19 12:07:52.213526] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:49.112 [2024-11-19 12:07:52.213537] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:49.112 [2024-11-19 12:07:52.213559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.112 [2024-11-19 12:07:52.228716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:49.112 spare 00:15:49.112 12:07:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.112 12:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:49.112 [2024-11-19 12:07:52.235733] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:50.051 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.051 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.051 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.051 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.051 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.051 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.051 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.051 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.051 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.051 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.051 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.051 "name": "raid_bdev1", 00:15:50.051 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:50.051 "strip_size_kb": 64, 00:15:50.051 "state": 
"online", 00:15:50.051 "raid_level": "raid5f", 00:15:50.051 "superblock": true, 00:15:50.051 "num_base_bdevs": 3, 00:15:50.051 "num_base_bdevs_discovered": 3, 00:15:50.051 "num_base_bdevs_operational": 3, 00:15:50.051 "process": { 00:15:50.051 "type": "rebuild", 00:15:50.051 "target": "spare", 00:15:50.051 "progress": { 00:15:50.051 "blocks": 20480, 00:15:50.051 "percent": 16 00:15:50.051 } 00:15:50.051 }, 00:15:50.051 "base_bdevs_list": [ 00:15:50.051 { 00:15:50.052 "name": "spare", 00:15:50.052 "uuid": "abe25379-fefb-562f-af57-6b658689cab8", 00:15:50.052 "is_configured": true, 00:15:50.052 "data_offset": 2048, 00:15:50.052 "data_size": 63488 00:15:50.052 }, 00:15:50.052 { 00:15:50.052 "name": "BaseBdev2", 00:15:50.052 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:50.052 "is_configured": true, 00:15:50.052 "data_offset": 2048, 00:15:50.052 "data_size": 63488 00:15:50.052 }, 00:15:50.052 { 00:15:50.052 "name": "BaseBdev3", 00:15:50.052 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:50.052 "is_configured": true, 00:15:50.052 "data_offset": 2048, 00:15:50.052 "data_size": 63488 00:15:50.052 } 00:15:50.052 ] 00:15:50.052 }' 00:15:50.052 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.052 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.052 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.052 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.052 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:50.052 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.052 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.052 [2024-11-19 12:07:53.346766] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.311 [2024-11-19 12:07:53.443386] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:50.311 [2024-11-19 12:07:53.443443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.311 [2024-11-19 12:07:53.443460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.311 [2024-11-19 12:07:53.443468] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.311 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.311 "name": "raid_bdev1", 00:15:50.311 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:50.311 "strip_size_kb": 64, 00:15:50.311 "state": "online", 00:15:50.311 "raid_level": "raid5f", 00:15:50.311 "superblock": true, 00:15:50.311 "num_base_bdevs": 3, 00:15:50.311 "num_base_bdevs_discovered": 2, 00:15:50.311 "num_base_bdevs_operational": 2, 00:15:50.312 "base_bdevs_list": [ 00:15:50.312 { 00:15:50.312 "name": null, 00:15:50.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.312 "is_configured": false, 00:15:50.312 "data_offset": 0, 00:15:50.312 "data_size": 63488 00:15:50.312 }, 00:15:50.312 { 00:15:50.312 "name": "BaseBdev2", 00:15:50.312 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:50.312 "is_configured": true, 00:15:50.312 "data_offset": 2048, 00:15:50.312 "data_size": 63488 00:15:50.312 }, 00:15:50.312 { 00:15:50.312 "name": "BaseBdev3", 00:15:50.312 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:50.312 "is_configured": true, 00:15:50.312 "data_offset": 2048, 00:15:50.312 "data_size": 63488 00:15:50.312 } 00:15:50.312 ] 00:15:50.312 }' 00:15:50.312 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.312 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.571 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.571 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:50.571 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.571 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.571 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.571 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.571 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.571 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.571 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.571 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.830 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.830 "name": "raid_bdev1", 00:15:50.830 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:50.830 "strip_size_kb": 64, 00:15:50.830 "state": "online", 00:15:50.830 "raid_level": "raid5f", 00:15:50.830 "superblock": true, 00:15:50.830 "num_base_bdevs": 3, 00:15:50.830 "num_base_bdevs_discovered": 2, 00:15:50.830 "num_base_bdevs_operational": 2, 00:15:50.830 "base_bdevs_list": [ 00:15:50.830 { 00:15:50.830 "name": null, 00:15:50.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.830 "is_configured": false, 00:15:50.830 "data_offset": 0, 00:15:50.830 "data_size": 63488 00:15:50.830 }, 00:15:50.830 { 00:15:50.830 "name": "BaseBdev2", 00:15:50.830 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:50.830 "is_configured": true, 00:15:50.830 "data_offset": 2048, 00:15:50.830 "data_size": 63488 00:15:50.830 }, 00:15:50.830 { 00:15:50.830 "name": "BaseBdev3", 00:15:50.830 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:50.830 "is_configured": true, 
00:15:50.830 "data_offset": 2048, 00:15:50.830 "data_size": 63488 00:15:50.830 } 00:15:50.830 ] 00:15:50.830 }' 00:15:50.830 12:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.830 12:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.830 12:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.830 12:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.830 12:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:50.830 12:07:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.830 12:07:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.830 12:07:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.830 12:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:50.830 12:07:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.830 12:07:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.830 [2024-11-19 12:07:54.074774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:50.830 [2024-11-19 12:07:54.074824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.830 [2024-11-19 12:07:54.074846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:50.830 [2024-11-19 12:07:54.074854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.830 [2024-11-19 12:07:54.075329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.830 [2024-11-19 
12:07:54.075353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:50.830 [2024-11-19 12:07:54.075434] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:50.830 [2024-11-19 12:07:54.075448] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:50.830 [2024-11-19 12:07:54.075470] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:50.830 [2024-11-19 12:07:54.075480] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:50.830 BaseBdev1 00:15:50.830 12:07:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.830 12:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:51.768 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:51.768 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.768 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.768 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.768 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.768 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.768 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.768 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.768 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.768 12:07:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.768 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.768 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.768 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.769 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.769 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.769 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.769 "name": "raid_bdev1", 00:15:51.769 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:51.769 "strip_size_kb": 64, 00:15:51.769 "state": "online", 00:15:51.769 "raid_level": "raid5f", 00:15:51.769 "superblock": true, 00:15:51.769 "num_base_bdevs": 3, 00:15:51.769 "num_base_bdevs_discovered": 2, 00:15:51.769 "num_base_bdevs_operational": 2, 00:15:51.769 "base_bdevs_list": [ 00:15:51.769 { 00:15:51.769 "name": null, 00:15:51.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.769 "is_configured": false, 00:15:51.769 "data_offset": 0, 00:15:51.769 "data_size": 63488 00:15:51.769 }, 00:15:51.769 { 00:15:51.769 "name": "BaseBdev2", 00:15:51.769 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:51.769 "is_configured": true, 00:15:51.769 "data_offset": 2048, 00:15:51.769 "data_size": 63488 00:15:51.769 }, 00:15:51.769 { 00:15:51.769 "name": "BaseBdev3", 00:15:51.769 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:51.769 "is_configured": true, 00:15:51.769 "data_offset": 2048, 00:15:51.769 "data_size": 63488 00:15:51.769 } 00:15:51.769 ] 00:15:51.769 }' 00:15:51.769 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.769 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.340 "name": "raid_bdev1", 00:15:52.340 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:52.340 "strip_size_kb": 64, 00:15:52.340 "state": "online", 00:15:52.340 "raid_level": "raid5f", 00:15:52.340 "superblock": true, 00:15:52.340 "num_base_bdevs": 3, 00:15:52.340 "num_base_bdevs_discovered": 2, 00:15:52.340 "num_base_bdevs_operational": 2, 00:15:52.340 "base_bdevs_list": [ 00:15:52.340 { 00:15:52.340 "name": null, 00:15:52.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.340 "is_configured": false, 00:15:52.340 "data_offset": 0, 00:15:52.340 "data_size": 63488 00:15:52.340 }, 00:15:52.340 { 00:15:52.340 "name": "BaseBdev2", 00:15:52.340 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 
00:15:52.340 "is_configured": true, 00:15:52.340 "data_offset": 2048, 00:15:52.340 "data_size": 63488 00:15:52.340 }, 00:15:52.340 { 00:15:52.340 "name": "BaseBdev3", 00:15:52.340 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:52.340 "is_configured": true, 00:15:52.340 "data_offset": 2048, 00:15:52.340 "data_size": 63488 00:15:52.340 } 00:15:52.340 ] 00:15:52.340 }' 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.340 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.341 12:07:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.341 [2024-11-19 12:07:55.696095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.341 [2024-11-19 12:07:55.696265] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:52.341 [2024-11-19 12:07:55.696286] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:52.341 request: 00:15:52.341 { 00:15:52.341 "base_bdev": "BaseBdev1", 00:15:52.341 "raid_bdev": "raid_bdev1", 00:15:52.341 "method": "bdev_raid_add_base_bdev", 00:15:52.341 "req_id": 1 00:15:52.341 } 00:15:52.341 Got JSON-RPC error response 00:15:52.341 response: 00:15:52.341 { 00:15:52.341 "code": -22, 00:15:52.341 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:52.341 } 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:52.341 12:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.746 "name": "raid_bdev1", 00:15:53.746 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:53.746 "strip_size_kb": 64, 00:15:53.746 "state": "online", 00:15:53.746 "raid_level": "raid5f", 00:15:53.746 "superblock": true, 00:15:53.746 "num_base_bdevs": 3, 00:15:53.746 "num_base_bdevs_discovered": 2, 00:15:53.746 "num_base_bdevs_operational": 2, 00:15:53.746 "base_bdevs_list": [ 00:15:53.746 { 00:15:53.746 "name": null, 00:15:53.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.746 "is_configured": false, 00:15:53.746 "data_offset": 0, 00:15:53.746 "data_size": 63488 00:15:53.746 }, 00:15:53.746 { 00:15:53.746 
"name": "BaseBdev2", 00:15:53.746 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:53.746 "is_configured": true, 00:15:53.746 "data_offset": 2048, 00:15:53.746 "data_size": 63488 00:15:53.746 }, 00:15:53.746 { 00:15:53.746 "name": "BaseBdev3", 00:15:53.746 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:53.746 "is_configured": true, 00:15:53.746 "data_offset": 2048, 00:15:53.746 "data_size": 63488 00:15:53.746 } 00:15:53.746 ] 00:15:53.746 }' 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.746 12:07:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.746 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.746 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.746 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.746 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.746 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.746 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.746 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.746 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.746 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.006 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.006 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.006 "name": "raid_bdev1", 00:15:54.006 "uuid": "189fa557-68b5-40ef-a02f-9614198e5932", 00:15:54.006 
"strip_size_kb": 64, 00:15:54.006 "state": "online", 00:15:54.006 "raid_level": "raid5f", 00:15:54.006 "superblock": true, 00:15:54.006 "num_base_bdevs": 3, 00:15:54.006 "num_base_bdevs_discovered": 2, 00:15:54.006 "num_base_bdevs_operational": 2, 00:15:54.006 "base_bdevs_list": [ 00:15:54.006 { 00:15:54.006 "name": null, 00:15:54.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.006 "is_configured": false, 00:15:54.006 "data_offset": 0, 00:15:54.006 "data_size": 63488 00:15:54.006 }, 00:15:54.006 { 00:15:54.006 "name": "BaseBdev2", 00:15:54.006 "uuid": "a6804593-48ed-5d8f-a722-6b09e84488c1", 00:15:54.006 "is_configured": true, 00:15:54.006 "data_offset": 2048, 00:15:54.006 "data_size": 63488 00:15:54.006 }, 00:15:54.006 { 00:15:54.006 "name": "BaseBdev3", 00:15:54.006 "uuid": "152f4d26-1da0-5376-bfb6-83663ab5af11", 00:15:54.006 "is_configured": true, 00:15:54.006 "data_offset": 2048, 00:15:54.006 "data_size": 63488 00:15:54.006 } 00:15:54.006 ] 00:15:54.006 }' 00:15:54.006 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.006 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:54.006 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.006 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:54.006 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81946 00:15:54.006 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81946 ']' 00:15:54.006 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81946 00:15:54.006 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:54.006 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.006 12:07:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81946 00:15:54.006 killing process with pid 81946 00:15:54.006 Received shutdown signal, test time was about 60.000000 seconds 00:15:54.006 00:15:54.006 Latency(us) 00:15:54.006 [2024-11-19T12:07:57.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.006 [2024-11-19T12:07:57.383Z] =================================================================================================================== 00:15:54.006 [2024-11-19T12:07:57.383Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:54.006 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.006 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.006 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81946' 00:15:54.007 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81946 00:15:54.007 [2024-11-19 12:07:57.302412] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.007 12:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81946 00:15:54.007 [2024-11-19 12:07:57.302537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.007 [2024-11-19 12:07:57.302611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.007 [2024-11-19 12:07:57.302628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:54.576 [2024-11-19 12:07:57.679052] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.515 12:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:55.515 00:15:55.515 real 0m23.078s 00:15:55.515 user 0m29.635s 
00:15:55.515 sys 0m2.703s 00:15:55.515 12:07:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.515 12:07:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 ************************************ 00:15:55.515 END TEST raid5f_rebuild_test_sb 00:15:55.515 ************************************ 00:15:55.515 12:07:58 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:55.515 12:07:58 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:55.515 12:07:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:55.515 12:07:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.515 12:07:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 ************************************ 00:15:55.515 START TEST raid5f_state_function_test 00:15:55.515 ************************************ 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:55.515 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82695 00:15:55.516 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:55.516 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82695' 00:15:55.516 Process raid pid: 82695 00:15:55.516 12:07:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82695 00:15:55.516 12:07:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82695 ']' 00:15:55.516 12:07:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.516 12:07:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.516 12:07:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.516 12:07:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.516 12:07:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.516 [2024-11-19 12:07:58.888987] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:15:55.516 [2024-11-19 12:07:58.889104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.776 [2024-11-19 12:07:59.060926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.035 [2024-11-19 12:07:59.179270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.035 [2024-11-19 12:07:59.379594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.035 [2024-11-19 12:07:59.379624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.603 [2024-11-19 12:07:59.705592] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:56.603 [2024-11-19 12:07:59.705659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:56.603 [2024-11-19 12:07:59.705669] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:56.603 [2024-11-19 12:07:59.705678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:56.603 [2024-11-19 12:07:59.705685] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:56.603 [2024-11-19 12:07:59.705693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:56.603 [2024-11-19 12:07:59.705700] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:56.603 [2024-11-19 12:07:59.705708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.603 12:07:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.603 "name": "Existed_Raid", 00:15:56.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.603 "strip_size_kb": 64, 00:15:56.603 "state": "configuring", 00:15:56.603 "raid_level": "raid5f", 00:15:56.603 "superblock": false, 00:15:56.603 "num_base_bdevs": 4, 00:15:56.603 "num_base_bdevs_discovered": 0, 00:15:56.603 "num_base_bdevs_operational": 4, 00:15:56.603 "base_bdevs_list": [ 00:15:56.603 { 00:15:56.603 "name": "BaseBdev1", 00:15:56.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.603 "is_configured": false, 00:15:56.603 "data_offset": 0, 00:15:56.603 "data_size": 0 00:15:56.603 }, 00:15:56.603 { 00:15:56.603 "name": "BaseBdev2", 00:15:56.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.603 "is_configured": false, 00:15:56.603 "data_offset": 0, 00:15:56.603 "data_size": 0 00:15:56.603 }, 00:15:56.603 { 00:15:56.603 "name": "BaseBdev3", 00:15:56.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.603 "is_configured": false, 00:15:56.603 "data_offset": 0, 00:15:56.603 "data_size": 0 00:15:56.603 }, 00:15:56.603 { 00:15:56.603 "name": "BaseBdev4", 00:15:56.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.603 "is_configured": false, 00:15:56.603 "data_offset": 0, 00:15:56.603 "data_size": 0 00:15:56.603 } 00:15:56.603 ] 00:15:56.603 }' 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.603 12:07:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.862 [2024-11-19 12:08:00.164737] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:56.862 [2024-11-19 12:08:00.164779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.862 [2024-11-19 12:08:00.176714] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:56.862 [2024-11-19 12:08:00.176757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:56.862 [2024-11-19 12:08:00.176766] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:56.862 [2024-11-19 12:08:00.176775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:56.862 [2024-11-19 12:08:00.176781] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:56.862 [2024-11-19 12:08:00.176790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:56.862 [2024-11-19 12:08:00.176796] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:56.862 [2024-11-19 12:08:00.176804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.862 [2024-11-19 12:08:00.222951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.862 BaseBdev1 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.862 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.862 
12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.122 [ 00:15:57.122 { 00:15:57.122 "name": "BaseBdev1", 00:15:57.122 "aliases": [ 00:15:57.122 "029386bd-c32b-4b3c-9e9b-8c2d75321d99" 00:15:57.122 ], 00:15:57.122 "product_name": "Malloc disk", 00:15:57.122 "block_size": 512, 00:15:57.122 "num_blocks": 65536, 00:15:57.122 "uuid": "029386bd-c32b-4b3c-9e9b-8c2d75321d99", 00:15:57.122 "assigned_rate_limits": { 00:15:57.122 "rw_ios_per_sec": 0, 00:15:57.122 "rw_mbytes_per_sec": 0, 00:15:57.122 "r_mbytes_per_sec": 0, 00:15:57.122 "w_mbytes_per_sec": 0 00:15:57.122 }, 00:15:57.122 "claimed": true, 00:15:57.122 "claim_type": "exclusive_write", 00:15:57.122 "zoned": false, 00:15:57.122 "supported_io_types": { 00:15:57.122 "read": true, 00:15:57.122 "write": true, 00:15:57.122 "unmap": true, 00:15:57.122 "flush": true, 00:15:57.122 "reset": true, 00:15:57.122 "nvme_admin": false, 00:15:57.122 "nvme_io": false, 00:15:57.122 "nvme_io_md": false, 00:15:57.122 "write_zeroes": true, 00:15:57.122 "zcopy": true, 00:15:57.122 "get_zone_info": false, 00:15:57.122 "zone_management": false, 00:15:57.122 "zone_append": false, 00:15:57.122 "compare": false, 00:15:57.122 "compare_and_write": false, 00:15:57.122 "abort": true, 00:15:57.122 "seek_hole": false, 00:15:57.122 "seek_data": false, 00:15:57.122 "copy": true, 00:15:57.122 "nvme_iov_md": false 00:15:57.122 }, 00:15:57.122 "memory_domains": [ 00:15:57.122 { 00:15:57.122 "dma_device_id": "system", 00:15:57.122 "dma_device_type": 1 00:15:57.122 }, 00:15:57.122 { 00:15:57.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.122 "dma_device_type": 2 00:15:57.122 } 00:15:57.122 ], 00:15:57.122 "driver_specific": {} 00:15:57.122 } 
00:15:57.122 ] 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.122 "name": "Existed_Raid", 00:15:57.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.122 "strip_size_kb": 64, 00:15:57.122 "state": "configuring", 00:15:57.122 "raid_level": "raid5f", 00:15:57.122 "superblock": false, 00:15:57.122 "num_base_bdevs": 4, 00:15:57.122 "num_base_bdevs_discovered": 1, 00:15:57.122 "num_base_bdevs_operational": 4, 00:15:57.122 "base_bdevs_list": [ 00:15:57.122 { 00:15:57.122 "name": "BaseBdev1", 00:15:57.122 "uuid": "029386bd-c32b-4b3c-9e9b-8c2d75321d99", 00:15:57.122 "is_configured": true, 00:15:57.122 "data_offset": 0, 00:15:57.122 "data_size": 65536 00:15:57.122 }, 00:15:57.122 { 00:15:57.122 "name": "BaseBdev2", 00:15:57.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.122 "is_configured": false, 00:15:57.122 "data_offset": 0, 00:15:57.122 "data_size": 0 00:15:57.122 }, 00:15:57.122 { 00:15:57.122 "name": "BaseBdev3", 00:15:57.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.122 "is_configured": false, 00:15:57.122 "data_offset": 0, 00:15:57.122 "data_size": 0 00:15:57.122 }, 00:15:57.122 { 00:15:57.122 "name": "BaseBdev4", 00:15:57.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.122 "is_configured": false, 00:15:57.122 "data_offset": 0, 00:15:57.122 "data_size": 0 00:15:57.122 } 00:15:57.122 ] 00:15:57.122 }' 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.122 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.383 
[2024-11-19 12:08:00.722116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.383 [2024-11-19 12:08:00.722168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.383 [2024-11-19 12:08:00.734177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.383 [2024-11-19 12:08:00.735933] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.383 [2024-11-19 12:08:00.735976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.383 [2024-11-19 12:08:00.735986] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:57.383 [2024-11-19 12:08:00.736006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:57.383 [2024-11-19 12:08:00.736014] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:57.383 [2024-11-19 12:08:00.736022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.383 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.642 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.642 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.642 "name": "Existed_Raid", 00:15:57.642 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:57.642 "strip_size_kb": 64, 00:15:57.642 "state": "configuring", 00:15:57.642 "raid_level": "raid5f", 00:15:57.642 "superblock": false, 00:15:57.643 "num_base_bdevs": 4, 00:15:57.643 "num_base_bdevs_discovered": 1, 00:15:57.643 "num_base_bdevs_operational": 4, 00:15:57.643 "base_bdevs_list": [ 00:15:57.643 { 00:15:57.643 "name": "BaseBdev1", 00:15:57.643 "uuid": "029386bd-c32b-4b3c-9e9b-8c2d75321d99", 00:15:57.643 "is_configured": true, 00:15:57.643 "data_offset": 0, 00:15:57.643 "data_size": 65536 00:15:57.643 }, 00:15:57.643 { 00:15:57.643 "name": "BaseBdev2", 00:15:57.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.643 "is_configured": false, 00:15:57.643 "data_offset": 0, 00:15:57.643 "data_size": 0 00:15:57.643 }, 00:15:57.643 { 00:15:57.643 "name": "BaseBdev3", 00:15:57.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.643 "is_configured": false, 00:15:57.643 "data_offset": 0, 00:15:57.643 "data_size": 0 00:15:57.643 }, 00:15:57.643 { 00:15:57.643 "name": "BaseBdev4", 00:15:57.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.643 "is_configured": false, 00:15:57.643 "data_offset": 0, 00:15:57.643 "data_size": 0 00:15:57.643 } 00:15:57.643 ] 00:15:57.643 }' 00:15:57.643 12:08:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.643 12:08:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.902 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:57.902 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.902 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.902 [2024-11-19 12:08:01.182854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.902 BaseBdev2 00:15:57.902 12:08:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.902 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:57.902 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:57.902 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:57.902 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:57.902 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:57.902 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:57.902 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:57.902 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.903 [ 00:15:57.903 { 00:15:57.903 "name": "BaseBdev2", 00:15:57.903 "aliases": [ 00:15:57.903 "e8327fee-58f8-472d-aee3-4287e9b8daa0" 00:15:57.903 ], 00:15:57.903 "product_name": "Malloc disk", 00:15:57.903 "block_size": 512, 00:15:57.903 "num_blocks": 65536, 00:15:57.903 "uuid": "e8327fee-58f8-472d-aee3-4287e9b8daa0", 00:15:57.903 "assigned_rate_limits": { 00:15:57.903 "rw_ios_per_sec": 0, 00:15:57.903 "rw_mbytes_per_sec": 0, 00:15:57.903 
"r_mbytes_per_sec": 0, 00:15:57.903 "w_mbytes_per_sec": 0 00:15:57.903 }, 00:15:57.903 "claimed": true, 00:15:57.903 "claim_type": "exclusive_write", 00:15:57.903 "zoned": false, 00:15:57.903 "supported_io_types": { 00:15:57.903 "read": true, 00:15:57.903 "write": true, 00:15:57.903 "unmap": true, 00:15:57.903 "flush": true, 00:15:57.903 "reset": true, 00:15:57.903 "nvme_admin": false, 00:15:57.903 "nvme_io": false, 00:15:57.903 "nvme_io_md": false, 00:15:57.903 "write_zeroes": true, 00:15:57.903 "zcopy": true, 00:15:57.903 "get_zone_info": false, 00:15:57.903 "zone_management": false, 00:15:57.903 "zone_append": false, 00:15:57.903 "compare": false, 00:15:57.903 "compare_and_write": false, 00:15:57.903 "abort": true, 00:15:57.903 "seek_hole": false, 00:15:57.903 "seek_data": false, 00:15:57.903 "copy": true, 00:15:57.903 "nvme_iov_md": false 00:15:57.903 }, 00:15:57.903 "memory_domains": [ 00:15:57.903 { 00:15:57.903 "dma_device_id": "system", 00:15:57.903 "dma_device_type": 1 00:15:57.903 }, 00:15:57.903 { 00:15:57.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.903 "dma_device_type": 2 00:15:57.903 } 00:15:57.903 ], 00:15:57.903 "driver_specific": {} 00:15:57.903 } 00:15:57.903 ] 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.903 "name": "Existed_Raid", 00:15:57.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.903 "strip_size_kb": 64, 00:15:57.903 "state": "configuring", 00:15:57.903 "raid_level": "raid5f", 00:15:57.903 "superblock": false, 00:15:57.903 "num_base_bdevs": 4, 00:15:57.903 "num_base_bdevs_discovered": 2, 00:15:57.903 "num_base_bdevs_operational": 4, 00:15:57.903 "base_bdevs_list": [ 00:15:57.903 { 00:15:57.903 "name": "BaseBdev1", 00:15:57.903 "uuid": 
"029386bd-c32b-4b3c-9e9b-8c2d75321d99", 00:15:57.903 "is_configured": true, 00:15:57.903 "data_offset": 0, 00:15:57.903 "data_size": 65536 00:15:57.903 }, 00:15:57.903 { 00:15:57.903 "name": "BaseBdev2", 00:15:57.903 "uuid": "e8327fee-58f8-472d-aee3-4287e9b8daa0", 00:15:57.903 "is_configured": true, 00:15:57.903 "data_offset": 0, 00:15:57.903 "data_size": 65536 00:15:57.903 }, 00:15:57.903 { 00:15:57.903 "name": "BaseBdev3", 00:15:57.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.903 "is_configured": false, 00:15:57.903 "data_offset": 0, 00:15:57.903 "data_size": 0 00:15:57.903 }, 00:15:57.903 { 00:15:57.903 "name": "BaseBdev4", 00:15:57.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.903 "is_configured": false, 00:15:57.903 "data_offset": 0, 00:15:57.903 "data_size": 0 00:15:57.903 } 00:15:57.903 ] 00:15:57.903 }' 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.903 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.474 [2024-11-19 12:08:01.724379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:58.474 BaseBdev3 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.474 [ 00:15:58.474 { 00:15:58.474 "name": "BaseBdev3", 00:15:58.474 "aliases": [ 00:15:58.474 "ae6bf559-905f-45cd-b74a-ccbe5f04e9cd" 00:15:58.474 ], 00:15:58.474 "product_name": "Malloc disk", 00:15:58.474 "block_size": 512, 00:15:58.474 "num_blocks": 65536, 00:15:58.474 "uuid": "ae6bf559-905f-45cd-b74a-ccbe5f04e9cd", 00:15:58.474 "assigned_rate_limits": { 00:15:58.474 "rw_ios_per_sec": 0, 00:15:58.474 "rw_mbytes_per_sec": 0, 00:15:58.474 "r_mbytes_per_sec": 0, 00:15:58.474 "w_mbytes_per_sec": 0 00:15:58.474 }, 00:15:58.474 "claimed": true, 00:15:58.474 "claim_type": "exclusive_write", 00:15:58.474 "zoned": false, 00:15:58.474 "supported_io_types": { 00:15:58.474 "read": true, 00:15:58.474 "write": true, 00:15:58.474 "unmap": true, 00:15:58.474 "flush": true, 00:15:58.474 "reset": true, 00:15:58.474 "nvme_admin": false, 
00:15:58.474 "nvme_io": false, 00:15:58.474 "nvme_io_md": false, 00:15:58.474 "write_zeroes": true, 00:15:58.474 "zcopy": true, 00:15:58.474 "get_zone_info": false, 00:15:58.474 "zone_management": false, 00:15:58.474 "zone_append": false, 00:15:58.474 "compare": false, 00:15:58.474 "compare_and_write": false, 00:15:58.474 "abort": true, 00:15:58.474 "seek_hole": false, 00:15:58.474 "seek_data": false, 00:15:58.474 "copy": true, 00:15:58.474 "nvme_iov_md": false 00:15:58.474 }, 00:15:58.474 "memory_domains": [ 00:15:58.474 { 00:15:58.474 "dma_device_id": "system", 00:15:58.474 "dma_device_type": 1 00:15:58.474 }, 00:15:58.474 { 00:15:58.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.474 "dma_device_type": 2 00:15:58.474 } 00:15:58.474 ], 00:15:58.474 "driver_specific": {} 00:15:58.474 } 00:15:58.474 ] 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.474 "name": "Existed_Raid", 00:15:58.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.474 "strip_size_kb": 64, 00:15:58.474 "state": "configuring", 00:15:58.474 "raid_level": "raid5f", 00:15:58.474 "superblock": false, 00:15:58.474 "num_base_bdevs": 4, 00:15:58.474 "num_base_bdevs_discovered": 3, 00:15:58.474 "num_base_bdevs_operational": 4, 00:15:58.474 "base_bdevs_list": [ 00:15:58.474 { 00:15:58.474 "name": "BaseBdev1", 00:15:58.474 "uuid": "029386bd-c32b-4b3c-9e9b-8c2d75321d99", 00:15:58.474 "is_configured": true, 00:15:58.474 "data_offset": 0, 00:15:58.474 "data_size": 65536 00:15:58.474 }, 00:15:58.474 { 00:15:58.474 "name": "BaseBdev2", 00:15:58.474 "uuid": "e8327fee-58f8-472d-aee3-4287e9b8daa0", 00:15:58.474 "is_configured": true, 00:15:58.474 "data_offset": 0, 00:15:58.474 "data_size": 65536 00:15:58.474 }, 00:15:58.474 { 
00:15:58.474 "name": "BaseBdev3", 00:15:58.474 "uuid": "ae6bf559-905f-45cd-b74a-ccbe5f04e9cd", 00:15:58.474 "is_configured": true, 00:15:58.474 "data_offset": 0, 00:15:58.474 "data_size": 65536 00:15:58.474 }, 00:15:58.474 { 00:15:58.474 "name": "BaseBdev4", 00:15:58.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.474 "is_configured": false, 00:15:58.474 "data_offset": 0, 00:15:58.474 "data_size": 0 00:15:58.474 } 00:15:58.474 ] 00:15:58.474 }' 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.474 12:08:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.042 [2024-11-19 12:08:02.232637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:59.042 [2024-11-19 12:08:02.232706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:59.042 [2024-11-19 12:08:02.232716] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:59.042 [2024-11-19 12:08:02.232970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:59.042 [2024-11-19 12:08:02.239837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:59.042 [2024-11-19 12:08:02.239865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:59.042 [2024-11-19 12:08:02.240133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.042 BaseBdev4 00:15:59.042 12:08:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.042 [ 00:15:59.042 { 00:15:59.042 "name": "BaseBdev4", 00:15:59.042 "aliases": [ 00:15:59.042 "abb1df4b-c549-4062-816a-1190167262ba" 00:15:59.042 ], 00:15:59.042 "product_name": "Malloc disk", 00:15:59.042 "block_size": 512, 00:15:59.042 "num_blocks": 65536, 00:15:59.042 "uuid": "abb1df4b-c549-4062-816a-1190167262ba", 00:15:59.042 "assigned_rate_limits": { 00:15:59.042 "rw_ios_per_sec": 0, 00:15:59.042 
"rw_mbytes_per_sec": 0, 00:15:59.042 "r_mbytes_per_sec": 0, 00:15:59.042 "w_mbytes_per_sec": 0 00:15:59.042 }, 00:15:59.042 "claimed": true, 00:15:59.042 "claim_type": "exclusive_write", 00:15:59.042 "zoned": false, 00:15:59.042 "supported_io_types": { 00:15:59.042 "read": true, 00:15:59.042 "write": true, 00:15:59.042 "unmap": true, 00:15:59.042 "flush": true, 00:15:59.042 "reset": true, 00:15:59.042 "nvme_admin": false, 00:15:59.042 "nvme_io": false, 00:15:59.042 "nvme_io_md": false, 00:15:59.042 "write_zeroes": true, 00:15:59.042 "zcopy": true, 00:15:59.042 "get_zone_info": false, 00:15:59.042 "zone_management": false, 00:15:59.042 "zone_append": false, 00:15:59.042 "compare": false, 00:15:59.042 "compare_and_write": false, 00:15:59.042 "abort": true, 00:15:59.042 "seek_hole": false, 00:15:59.042 "seek_data": false, 00:15:59.042 "copy": true, 00:15:59.042 "nvme_iov_md": false 00:15:59.042 }, 00:15:59.042 "memory_domains": [ 00:15:59.042 { 00:15:59.042 "dma_device_id": "system", 00:15:59.042 "dma_device_type": 1 00:15:59.042 }, 00:15:59.042 { 00:15:59.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.042 "dma_device_type": 2 00:15:59.042 } 00:15:59.042 ], 00:15:59.042 "driver_specific": {} 00:15:59.042 } 00:15:59.042 ] 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.042 12:08:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.042 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.043 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.043 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.043 "name": "Existed_Raid", 00:15:59.043 "uuid": "d4b13bb9-96d3-4228-a8e6-b6d84018fa42", 00:15:59.043 "strip_size_kb": 64, 00:15:59.043 "state": "online", 00:15:59.043 "raid_level": "raid5f", 00:15:59.043 "superblock": false, 00:15:59.043 "num_base_bdevs": 4, 00:15:59.043 "num_base_bdevs_discovered": 4, 00:15:59.043 "num_base_bdevs_operational": 4, 00:15:59.043 "base_bdevs_list": [ 00:15:59.043 { 00:15:59.043 "name": 
"BaseBdev1", 00:15:59.043 "uuid": "029386bd-c32b-4b3c-9e9b-8c2d75321d99", 00:15:59.043 "is_configured": true, 00:15:59.043 "data_offset": 0, 00:15:59.043 "data_size": 65536 00:15:59.043 }, 00:15:59.043 { 00:15:59.043 "name": "BaseBdev2", 00:15:59.043 "uuid": "e8327fee-58f8-472d-aee3-4287e9b8daa0", 00:15:59.043 "is_configured": true, 00:15:59.043 "data_offset": 0, 00:15:59.043 "data_size": 65536 00:15:59.043 }, 00:15:59.043 { 00:15:59.043 "name": "BaseBdev3", 00:15:59.043 "uuid": "ae6bf559-905f-45cd-b74a-ccbe5f04e9cd", 00:15:59.043 "is_configured": true, 00:15:59.043 "data_offset": 0, 00:15:59.043 "data_size": 65536 00:15:59.043 }, 00:15:59.043 { 00:15:59.043 "name": "BaseBdev4", 00:15:59.043 "uuid": "abb1df4b-c549-4062-816a-1190167262ba", 00:15:59.043 "is_configured": true, 00:15:59.043 "data_offset": 0, 00:15:59.043 "data_size": 65536 00:15:59.043 } 00:15:59.043 ] 00:15:59.043 }' 00:15:59.043 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.043 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.613 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:59.613 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:59.613 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:59.613 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:59.613 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:59.613 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:59.613 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:59.613 12:08:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.613 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.613 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:59.613 [2024-11-19 12:08:02.723705] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.613 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.613 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:59.613 "name": "Existed_Raid", 00:15:59.613 "aliases": [ 00:15:59.613 "d4b13bb9-96d3-4228-a8e6-b6d84018fa42" 00:15:59.613 ], 00:15:59.613 "product_name": "Raid Volume", 00:15:59.613 "block_size": 512, 00:15:59.613 "num_blocks": 196608, 00:15:59.613 "uuid": "d4b13bb9-96d3-4228-a8e6-b6d84018fa42", 00:15:59.613 "assigned_rate_limits": { 00:15:59.613 "rw_ios_per_sec": 0, 00:15:59.613 "rw_mbytes_per_sec": 0, 00:15:59.613 "r_mbytes_per_sec": 0, 00:15:59.613 "w_mbytes_per_sec": 0 00:15:59.613 }, 00:15:59.613 "claimed": false, 00:15:59.613 "zoned": false, 00:15:59.613 "supported_io_types": { 00:15:59.613 "read": true, 00:15:59.613 "write": true, 00:15:59.613 "unmap": false, 00:15:59.613 "flush": false, 00:15:59.613 "reset": true, 00:15:59.613 "nvme_admin": false, 00:15:59.613 "nvme_io": false, 00:15:59.613 "nvme_io_md": false, 00:15:59.613 "write_zeroes": true, 00:15:59.613 "zcopy": false, 00:15:59.613 "get_zone_info": false, 00:15:59.613 "zone_management": false, 00:15:59.613 "zone_append": false, 00:15:59.613 "compare": false, 00:15:59.613 "compare_and_write": false, 00:15:59.613 "abort": false, 00:15:59.613 "seek_hole": false, 00:15:59.613 "seek_data": false, 00:15:59.613 "copy": false, 00:15:59.613 "nvme_iov_md": false 00:15:59.613 }, 00:15:59.613 "driver_specific": { 00:15:59.613 "raid": { 00:15:59.613 "uuid": "d4b13bb9-96d3-4228-a8e6-b6d84018fa42", 00:15:59.613 "strip_size_kb": 64, 
00:15:59.613 "state": "online", 00:15:59.613 "raid_level": "raid5f", 00:15:59.613 "superblock": false, 00:15:59.613 "num_base_bdevs": 4, 00:15:59.613 "num_base_bdevs_discovered": 4, 00:15:59.613 "num_base_bdevs_operational": 4, 00:15:59.613 "base_bdevs_list": [ 00:15:59.613 { 00:15:59.613 "name": "BaseBdev1", 00:15:59.613 "uuid": "029386bd-c32b-4b3c-9e9b-8c2d75321d99", 00:15:59.613 "is_configured": true, 00:15:59.613 "data_offset": 0, 00:15:59.613 "data_size": 65536 00:15:59.613 }, 00:15:59.613 { 00:15:59.613 "name": "BaseBdev2", 00:15:59.613 "uuid": "e8327fee-58f8-472d-aee3-4287e9b8daa0", 00:15:59.613 "is_configured": true, 00:15:59.613 "data_offset": 0, 00:15:59.613 "data_size": 65536 00:15:59.613 }, 00:15:59.613 { 00:15:59.613 "name": "BaseBdev3", 00:15:59.613 "uuid": "ae6bf559-905f-45cd-b74a-ccbe5f04e9cd", 00:15:59.613 "is_configured": true, 00:15:59.613 "data_offset": 0, 00:15:59.613 "data_size": 65536 00:15:59.613 }, 00:15:59.613 { 00:15:59.614 "name": "BaseBdev4", 00:15:59.614 "uuid": "abb1df4b-c549-4062-816a-1190167262ba", 00:15:59.614 "is_configured": true, 00:15:59.614 "data_offset": 0, 00:15:59.614 "data_size": 65536 00:15:59.614 } 00:15:59.614 ] 00:15:59.614 } 00:15:59.614 } 00:15:59.614 }' 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:59.614 BaseBdev2 00:15:59.614 BaseBdev3 00:15:59.614 BaseBdev4' 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.614 12:08:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.614 12:08:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.875 [2024-11-19 12:08:03.019027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.875 12:08:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.875 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.875 "name": "Existed_Raid", 00:15:59.875 "uuid": "d4b13bb9-96d3-4228-a8e6-b6d84018fa42", 00:15:59.875 "strip_size_kb": 64, 00:15:59.875 "state": "online", 00:15:59.875 "raid_level": "raid5f", 00:15:59.875 "superblock": false, 00:15:59.875 "num_base_bdevs": 4, 00:15:59.875 "num_base_bdevs_discovered": 3, 00:15:59.875 "num_base_bdevs_operational": 3, 00:15:59.875 "base_bdevs_list": [ 00:15:59.875 { 00:15:59.875 "name": null, 00:15:59.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.875 "is_configured": false, 00:15:59.875 "data_offset": 0, 00:15:59.875 "data_size": 65536 00:15:59.875 }, 00:15:59.875 { 00:15:59.875 "name": "BaseBdev2", 00:15:59.875 "uuid": "e8327fee-58f8-472d-aee3-4287e9b8daa0", 00:15:59.875 "is_configured": true, 00:15:59.875 "data_offset": 0, 00:15:59.875 "data_size": 65536 00:15:59.875 }, 00:15:59.875 { 00:15:59.875 "name": "BaseBdev3", 00:15:59.875 "uuid": "ae6bf559-905f-45cd-b74a-ccbe5f04e9cd", 00:15:59.875 "is_configured": true, 00:15:59.875 "data_offset": 0, 00:15:59.875 "data_size": 65536 00:15:59.875 }, 00:15:59.875 { 00:15:59.875 "name": "BaseBdev4", 00:15:59.875 "uuid": "abb1df4b-c549-4062-816a-1190167262ba", 00:15:59.876 "is_configured": true, 00:15:59.876 "data_offset": 0, 00:15:59.876 "data_size": 65536 00:15:59.876 } 00:15:59.876 ] 00:15:59.876 }' 00:15:59.876 
12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.876 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.446 [2024-11-19 12:08:03.591822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:00.446 [2024-11-19 12:08:03.591926] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.446 [2024-11-19 12:08:03.683321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.446 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.446 [2024-11-19 12:08:03.739208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.706 [2024-11-19 12:08:03.889588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:00.706 [2024-11-19 12:08:03.889641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:00.706 12:08:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.706 BaseBdev2 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.706 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.972 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.972 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:00.972 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.972 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.972 [ 00:16:00.972 { 00:16:00.972 "name": "BaseBdev2", 00:16:00.972 "aliases": [ 00:16:00.972 "284c0b84-e174-474a-9e5c-c625b14ea01c" 00:16:00.972 ], 00:16:00.972 "product_name": "Malloc disk", 00:16:00.972 "block_size": 512, 00:16:00.972 "num_blocks": 65536, 00:16:00.972 "uuid": "284c0b84-e174-474a-9e5c-c625b14ea01c", 00:16:00.972 "assigned_rate_limits": { 00:16:00.972 "rw_ios_per_sec": 0, 00:16:00.972 "rw_mbytes_per_sec": 0, 00:16:00.972 "r_mbytes_per_sec": 0, 00:16:00.972 "w_mbytes_per_sec": 0 00:16:00.972 }, 00:16:00.972 "claimed": false, 00:16:00.972 "zoned": false, 00:16:00.972 "supported_io_types": { 00:16:00.972 "read": true, 00:16:00.972 "write": true, 00:16:00.972 "unmap": true, 00:16:00.972 "flush": true, 00:16:00.972 "reset": true, 00:16:00.972 "nvme_admin": false, 00:16:00.972 "nvme_io": false, 00:16:00.972 "nvme_io_md": false, 00:16:00.972 "write_zeroes": true, 00:16:00.972 "zcopy": true, 00:16:00.972 "get_zone_info": false, 00:16:00.972 "zone_management": false, 00:16:00.972 "zone_append": false, 00:16:00.972 "compare": false, 00:16:00.972 "compare_and_write": false, 00:16:00.972 "abort": true, 00:16:00.972 "seek_hole": false, 00:16:00.972 "seek_data": false, 00:16:00.972 "copy": true, 00:16:00.972 "nvme_iov_md": false 00:16:00.972 }, 00:16:00.972 "memory_domains": [ 00:16:00.972 { 00:16:00.972 "dma_device_id": "system", 00:16:00.972 
"dma_device_type": 1 00:16:00.972 }, 00:16:00.972 { 00:16:00.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.972 "dma_device_type": 2 00:16:00.972 } 00:16:00.972 ], 00:16:00.972 "driver_specific": {} 00:16:00.972 } 00:16:00.972 ] 00:16:00.972 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.972 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:00.972 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:00.972 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:00.972 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.973 BaseBdev3 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:00.973 12:08:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.973 [ 00:16:00.973 { 00:16:00.973 "name": "BaseBdev3", 00:16:00.973 "aliases": [ 00:16:00.973 "d6488ea5-1847-48af-bcfe-ad232b58b68e" 00:16:00.973 ], 00:16:00.973 "product_name": "Malloc disk", 00:16:00.973 "block_size": 512, 00:16:00.973 "num_blocks": 65536, 00:16:00.973 "uuid": "d6488ea5-1847-48af-bcfe-ad232b58b68e", 00:16:00.973 "assigned_rate_limits": { 00:16:00.973 "rw_ios_per_sec": 0, 00:16:00.973 "rw_mbytes_per_sec": 0, 00:16:00.973 "r_mbytes_per_sec": 0, 00:16:00.973 "w_mbytes_per_sec": 0 00:16:00.973 }, 00:16:00.973 "claimed": false, 00:16:00.973 "zoned": false, 00:16:00.973 "supported_io_types": { 00:16:00.973 "read": true, 00:16:00.973 "write": true, 00:16:00.973 "unmap": true, 00:16:00.973 "flush": true, 00:16:00.973 "reset": true, 00:16:00.973 "nvme_admin": false, 00:16:00.973 "nvme_io": false, 00:16:00.973 "nvme_io_md": false, 00:16:00.973 "write_zeroes": true, 00:16:00.973 "zcopy": true, 00:16:00.973 "get_zone_info": false, 00:16:00.973 "zone_management": false, 00:16:00.973 "zone_append": false, 00:16:00.973 "compare": false, 00:16:00.973 "compare_and_write": false, 00:16:00.973 "abort": true, 00:16:00.973 "seek_hole": false, 00:16:00.973 "seek_data": false, 00:16:00.973 "copy": true, 00:16:00.973 "nvme_iov_md": false 00:16:00.973 }, 00:16:00.973 "memory_domains": [ 00:16:00.973 { 00:16:00.973 
"dma_device_id": "system", 00:16:00.973 "dma_device_type": 1 00:16:00.973 }, 00:16:00.973 { 00:16:00.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.973 "dma_device_type": 2 00:16:00.973 } 00:16:00.973 ], 00:16:00.973 "driver_specific": {} 00:16:00.973 } 00:16:00.973 ] 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.973 BaseBdev4 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.973 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.973 [ 00:16:00.973 { 00:16:00.973 "name": "BaseBdev4", 00:16:00.973 "aliases": [ 00:16:00.973 "4b112488-cf1b-4614-9bf1-dd5bf693e393" 00:16:00.973 ], 00:16:00.973 "product_name": "Malloc disk", 00:16:00.973 "block_size": 512, 00:16:00.973 "num_blocks": 65536, 00:16:00.973 "uuid": "4b112488-cf1b-4614-9bf1-dd5bf693e393", 00:16:00.973 "assigned_rate_limits": { 00:16:00.973 "rw_ios_per_sec": 0, 00:16:00.973 "rw_mbytes_per_sec": 0, 00:16:00.973 "r_mbytes_per_sec": 0, 00:16:00.973 "w_mbytes_per_sec": 0 00:16:00.973 }, 00:16:00.973 "claimed": false, 00:16:00.973 "zoned": false, 00:16:00.973 "supported_io_types": { 00:16:00.973 "read": true, 00:16:00.973 "write": true, 00:16:00.973 "unmap": true, 00:16:00.973 "flush": true, 00:16:00.973 "reset": true, 00:16:00.973 "nvme_admin": false, 00:16:00.973 "nvme_io": false, 00:16:00.973 "nvme_io_md": false, 00:16:00.973 "write_zeroes": true, 00:16:00.973 "zcopy": true, 00:16:00.973 "get_zone_info": false, 00:16:00.973 "zone_management": false, 00:16:00.973 "zone_append": false, 00:16:00.973 "compare": false, 00:16:00.973 "compare_and_write": false, 00:16:00.973 "abort": true, 00:16:00.973 "seek_hole": false, 00:16:00.973 "seek_data": false, 00:16:00.973 "copy": true, 00:16:00.973 "nvme_iov_md": false 00:16:00.973 }, 00:16:00.973 "memory_domains": [ 
00:16:00.973 { 00:16:00.973 "dma_device_id": "system", 00:16:00.973 "dma_device_type": 1 00:16:00.973 }, 00:16:00.973 { 00:16:00.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.973 "dma_device_type": 2 00:16:00.974 } 00:16:00.974 ], 00:16:00.974 "driver_specific": {} 00:16:00.974 } 00:16:00.974 ] 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.974 [2024-11-19 12:08:04.270115] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:00.974 [2024-11-19 12:08:04.270153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:00.974 [2024-11-19 12:08:04.270189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.974 [2024-11-19 12:08:04.271885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:00.974 [2024-11-19 12:08:04.271951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.974 "name": "Existed_Raid", 00:16:00.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.974 "strip_size_kb": 64, 00:16:00.974 "state": "configuring", 00:16:00.974 "raid_level": "raid5f", 00:16:00.974 
"superblock": false, 00:16:00.974 "num_base_bdevs": 4, 00:16:00.974 "num_base_bdevs_discovered": 3, 00:16:00.974 "num_base_bdevs_operational": 4, 00:16:00.974 "base_bdevs_list": [ 00:16:00.974 { 00:16:00.974 "name": "BaseBdev1", 00:16:00.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.974 "is_configured": false, 00:16:00.974 "data_offset": 0, 00:16:00.974 "data_size": 0 00:16:00.974 }, 00:16:00.974 { 00:16:00.974 "name": "BaseBdev2", 00:16:00.974 "uuid": "284c0b84-e174-474a-9e5c-c625b14ea01c", 00:16:00.974 "is_configured": true, 00:16:00.974 "data_offset": 0, 00:16:00.974 "data_size": 65536 00:16:00.974 }, 00:16:00.974 { 00:16:00.974 "name": "BaseBdev3", 00:16:00.974 "uuid": "d6488ea5-1847-48af-bcfe-ad232b58b68e", 00:16:00.974 "is_configured": true, 00:16:00.974 "data_offset": 0, 00:16:00.974 "data_size": 65536 00:16:00.974 }, 00:16:00.974 { 00:16:00.974 "name": "BaseBdev4", 00:16:00.974 "uuid": "4b112488-cf1b-4614-9bf1-dd5bf693e393", 00:16:00.974 "is_configured": true, 00:16:00.974 "data_offset": 0, 00:16:00.974 "data_size": 65536 00:16:00.974 } 00:16:00.974 ] 00:16:00.974 }' 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.974 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.543 [2024-11-19 12:08:04.705389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.543 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.543 "name": "Existed_Raid", 00:16:01.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.543 "strip_size_kb": 64, 00:16:01.543 "state": "configuring", 00:16:01.543 "raid_level": "raid5f", 00:16:01.543 "superblock": false, 
00:16:01.543 "num_base_bdevs": 4, 00:16:01.543 "num_base_bdevs_discovered": 2, 00:16:01.543 "num_base_bdevs_operational": 4, 00:16:01.544 "base_bdevs_list": [ 00:16:01.544 { 00:16:01.544 "name": "BaseBdev1", 00:16:01.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.544 "is_configured": false, 00:16:01.544 "data_offset": 0, 00:16:01.544 "data_size": 0 00:16:01.544 }, 00:16:01.544 { 00:16:01.544 "name": null, 00:16:01.544 "uuid": "284c0b84-e174-474a-9e5c-c625b14ea01c", 00:16:01.544 "is_configured": false, 00:16:01.544 "data_offset": 0, 00:16:01.544 "data_size": 65536 00:16:01.544 }, 00:16:01.544 { 00:16:01.544 "name": "BaseBdev3", 00:16:01.544 "uuid": "d6488ea5-1847-48af-bcfe-ad232b58b68e", 00:16:01.544 "is_configured": true, 00:16:01.544 "data_offset": 0, 00:16:01.544 "data_size": 65536 00:16:01.544 }, 00:16:01.544 { 00:16:01.544 "name": "BaseBdev4", 00:16:01.544 "uuid": "4b112488-cf1b-4614-9bf1-dd5bf693e393", 00:16:01.544 "is_configured": true, 00:16:01.544 "data_offset": 0, 00:16:01.544 "data_size": 65536 00:16:01.544 } 00:16:01.544 ] 00:16:01.544 }' 00:16:01.544 12:08:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.544 12:08:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.803 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.803 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.803 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.803 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:01.803 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.803 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:01.803 
12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:01.803 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.803 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.062 [2024-11-19 12:08:05.215205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.062 BaseBdev1 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.062 
12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.062 [ 00:16:02.062 { 00:16:02.062 "name": "BaseBdev1", 00:16:02.062 "aliases": [ 00:16:02.062 "1f81e80b-7f0a-4df8-8b84-d499d5ec98ff" 00:16:02.062 ], 00:16:02.062 "product_name": "Malloc disk", 00:16:02.062 "block_size": 512, 00:16:02.062 "num_blocks": 65536, 00:16:02.062 "uuid": "1f81e80b-7f0a-4df8-8b84-d499d5ec98ff", 00:16:02.062 "assigned_rate_limits": { 00:16:02.062 "rw_ios_per_sec": 0, 00:16:02.062 "rw_mbytes_per_sec": 0, 00:16:02.062 "r_mbytes_per_sec": 0, 00:16:02.062 "w_mbytes_per_sec": 0 00:16:02.062 }, 00:16:02.062 "claimed": true, 00:16:02.062 "claim_type": "exclusive_write", 00:16:02.062 "zoned": false, 00:16:02.062 "supported_io_types": { 00:16:02.062 "read": true, 00:16:02.062 "write": true, 00:16:02.062 "unmap": true, 00:16:02.062 "flush": true, 00:16:02.062 "reset": true, 00:16:02.062 "nvme_admin": false, 00:16:02.062 "nvme_io": false, 00:16:02.062 "nvme_io_md": false, 00:16:02.062 "write_zeroes": true, 00:16:02.062 "zcopy": true, 00:16:02.062 "get_zone_info": false, 00:16:02.062 "zone_management": false, 00:16:02.062 "zone_append": false, 00:16:02.062 "compare": false, 00:16:02.062 "compare_and_write": false, 00:16:02.062 "abort": true, 00:16:02.062 "seek_hole": false, 00:16:02.062 "seek_data": false, 00:16:02.062 "copy": true, 00:16:02.062 "nvme_iov_md": false 00:16:02.062 }, 00:16:02.062 "memory_domains": [ 00:16:02.062 { 00:16:02.062 "dma_device_id": "system", 00:16:02.062 "dma_device_type": 1 00:16:02.062 }, 00:16:02.062 { 00:16:02.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.062 "dma_device_type": 2 00:16:02.062 } 00:16:02.062 ], 00:16:02.062 "driver_specific": {} 00:16:02.062 } 00:16:02.062 ] 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:02.062 12:08:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.062 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.063 "name": "Existed_Raid", 00:16:02.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.063 "strip_size_kb": 64, 00:16:02.063 "state": 
"configuring", 00:16:02.063 "raid_level": "raid5f", 00:16:02.063 "superblock": false, 00:16:02.063 "num_base_bdevs": 4, 00:16:02.063 "num_base_bdevs_discovered": 3, 00:16:02.063 "num_base_bdevs_operational": 4, 00:16:02.063 "base_bdevs_list": [ 00:16:02.063 { 00:16:02.063 "name": "BaseBdev1", 00:16:02.063 "uuid": "1f81e80b-7f0a-4df8-8b84-d499d5ec98ff", 00:16:02.063 "is_configured": true, 00:16:02.063 "data_offset": 0, 00:16:02.063 "data_size": 65536 00:16:02.063 }, 00:16:02.063 { 00:16:02.063 "name": null, 00:16:02.063 "uuid": "284c0b84-e174-474a-9e5c-c625b14ea01c", 00:16:02.063 "is_configured": false, 00:16:02.063 "data_offset": 0, 00:16:02.063 "data_size": 65536 00:16:02.063 }, 00:16:02.063 { 00:16:02.063 "name": "BaseBdev3", 00:16:02.063 "uuid": "d6488ea5-1847-48af-bcfe-ad232b58b68e", 00:16:02.063 "is_configured": true, 00:16:02.063 "data_offset": 0, 00:16:02.063 "data_size": 65536 00:16:02.063 }, 00:16:02.063 { 00:16:02.063 "name": "BaseBdev4", 00:16:02.063 "uuid": "4b112488-cf1b-4614-9bf1-dd5bf693e393", 00:16:02.063 "is_configured": true, 00:16:02.063 "data_offset": 0, 00:16:02.063 "data_size": 65536 00:16:02.063 } 00:16:02.063 ] 00:16:02.063 }' 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.063 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.631 12:08:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.631 [2024-11-19 12:08:05.758280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.631 12:08:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.631 12:08:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.632 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.632 "name": "Existed_Raid", 00:16:02.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.632 "strip_size_kb": 64, 00:16:02.632 "state": "configuring", 00:16:02.632 "raid_level": "raid5f", 00:16:02.632 "superblock": false, 00:16:02.632 "num_base_bdevs": 4, 00:16:02.632 "num_base_bdevs_discovered": 2, 00:16:02.632 "num_base_bdevs_operational": 4, 00:16:02.632 "base_bdevs_list": [ 00:16:02.632 { 00:16:02.632 "name": "BaseBdev1", 00:16:02.632 "uuid": "1f81e80b-7f0a-4df8-8b84-d499d5ec98ff", 00:16:02.632 "is_configured": true, 00:16:02.632 "data_offset": 0, 00:16:02.632 "data_size": 65536 00:16:02.632 }, 00:16:02.632 { 00:16:02.632 "name": null, 00:16:02.632 "uuid": "284c0b84-e174-474a-9e5c-c625b14ea01c", 00:16:02.632 "is_configured": false, 00:16:02.632 "data_offset": 0, 00:16:02.632 "data_size": 65536 00:16:02.632 }, 00:16:02.632 { 00:16:02.632 "name": null, 00:16:02.632 "uuid": "d6488ea5-1847-48af-bcfe-ad232b58b68e", 00:16:02.632 "is_configured": false, 00:16:02.632 "data_offset": 0, 00:16:02.632 "data_size": 65536 00:16:02.632 }, 00:16:02.632 { 00:16:02.632 "name": "BaseBdev4", 00:16:02.632 "uuid": "4b112488-cf1b-4614-9bf1-dd5bf693e393", 00:16:02.632 "is_configured": true, 00:16:02.632 "data_offset": 0, 00:16:02.632 "data_size": 65536 00:16:02.632 } 00:16:02.632 ] 00:16:02.632 }' 00:16:02.632 12:08:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.632 12:08:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.891 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.891 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:02.891 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.891 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.891 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.891 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:02.891 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:02.891 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.891 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.891 [2024-11-19 12:08:06.265408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.150 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.150 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:03.150 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.151 
12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.151 "name": "Existed_Raid", 00:16:03.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.151 "strip_size_kb": 64, 00:16:03.151 "state": "configuring", 00:16:03.151 "raid_level": "raid5f", 00:16:03.151 "superblock": false, 00:16:03.151 "num_base_bdevs": 4, 00:16:03.151 "num_base_bdevs_discovered": 3, 00:16:03.151 "num_base_bdevs_operational": 4, 00:16:03.151 "base_bdevs_list": [ 00:16:03.151 { 00:16:03.151 "name": "BaseBdev1", 00:16:03.151 "uuid": "1f81e80b-7f0a-4df8-8b84-d499d5ec98ff", 00:16:03.151 "is_configured": true, 00:16:03.151 "data_offset": 0, 00:16:03.151 "data_size": 65536 00:16:03.151 }, 00:16:03.151 { 00:16:03.151 "name": null, 00:16:03.151 "uuid": "284c0b84-e174-474a-9e5c-c625b14ea01c", 00:16:03.151 "is_configured": 
false, 00:16:03.151 "data_offset": 0, 00:16:03.151 "data_size": 65536 00:16:03.151 }, 00:16:03.151 { 00:16:03.151 "name": "BaseBdev3", 00:16:03.151 "uuid": "d6488ea5-1847-48af-bcfe-ad232b58b68e", 00:16:03.151 "is_configured": true, 00:16:03.151 "data_offset": 0, 00:16:03.151 "data_size": 65536 00:16:03.151 }, 00:16:03.151 { 00:16:03.151 "name": "BaseBdev4", 00:16:03.151 "uuid": "4b112488-cf1b-4614-9bf1-dd5bf693e393", 00:16:03.151 "is_configured": true, 00:16:03.151 "data_offset": 0, 00:16:03.151 "data_size": 65536 00:16:03.151 } 00:16:03.151 ] 00:16:03.151 }' 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.151 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.410 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.411 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.411 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.411 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:03.411 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.411 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:03.411 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:03.411 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.411 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.670 [2024-11-19 12:08:06.788538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:03.670 12:08:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.671 "name": "Existed_Raid", 00:16:03.671 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:03.671 "strip_size_kb": 64, 00:16:03.671 "state": "configuring", 00:16:03.671 "raid_level": "raid5f", 00:16:03.671 "superblock": false, 00:16:03.671 "num_base_bdevs": 4, 00:16:03.671 "num_base_bdevs_discovered": 2, 00:16:03.671 "num_base_bdevs_operational": 4, 00:16:03.671 "base_bdevs_list": [ 00:16:03.671 { 00:16:03.671 "name": null, 00:16:03.671 "uuid": "1f81e80b-7f0a-4df8-8b84-d499d5ec98ff", 00:16:03.671 "is_configured": false, 00:16:03.671 "data_offset": 0, 00:16:03.671 "data_size": 65536 00:16:03.671 }, 00:16:03.671 { 00:16:03.671 "name": null, 00:16:03.671 "uuid": "284c0b84-e174-474a-9e5c-c625b14ea01c", 00:16:03.671 "is_configured": false, 00:16:03.671 "data_offset": 0, 00:16:03.671 "data_size": 65536 00:16:03.671 }, 00:16:03.671 { 00:16:03.671 "name": "BaseBdev3", 00:16:03.671 "uuid": "d6488ea5-1847-48af-bcfe-ad232b58b68e", 00:16:03.671 "is_configured": true, 00:16:03.671 "data_offset": 0, 00:16:03.671 "data_size": 65536 00:16:03.671 }, 00:16:03.671 { 00:16:03.671 "name": "BaseBdev4", 00:16:03.671 "uuid": "4b112488-cf1b-4614-9bf1-dd5bf693e393", 00:16:03.671 "is_configured": true, 00:16:03.671 "data_offset": 0, 00:16:03.671 "data_size": 65536 00:16:03.671 } 00:16:03.671 ] 00:16:03.671 }' 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.671 12:08:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.937 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.937 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.937 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.937 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:04.207 12:08:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.207 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:04.207 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:04.207 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.207 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.207 [2024-11-19 12:08:07.357038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.207 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.207 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.207 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.207 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.207 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.207 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.208 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.208 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.208 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.208 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.208 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.208 12:08:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.208 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.208 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.208 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.208 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.208 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.208 "name": "Existed_Raid", 00:16:04.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.208 "strip_size_kb": 64, 00:16:04.208 "state": "configuring", 00:16:04.208 "raid_level": "raid5f", 00:16:04.208 "superblock": false, 00:16:04.208 "num_base_bdevs": 4, 00:16:04.208 "num_base_bdevs_discovered": 3, 00:16:04.208 "num_base_bdevs_operational": 4, 00:16:04.208 "base_bdevs_list": [ 00:16:04.208 { 00:16:04.208 "name": null, 00:16:04.208 "uuid": "1f81e80b-7f0a-4df8-8b84-d499d5ec98ff", 00:16:04.208 "is_configured": false, 00:16:04.208 "data_offset": 0, 00:16:04.208 "data_size": 65536 00:16:04.208 }, 00:16:04.208 { 00:16:04.208 "name": "BaseBdev2", 00:16:04.208 "uuid": "284c0b84-e174-474a-9e5c-c625b14ea01c", 00:16:04.208 "is_configured": true, 00:16:04.208 "data_offset": 0, 00:16:04.208 "data_size": 65536 00:16:04.208 }, 00:16:04.208 { 00:16:04.208 "name": "BaseBdev3", 00:16:04.208 "uuid": "d6488ea5-1847-48af-bcfe-ad232b58b68e", 00:16:04.208 "is_configured": true, 00:16:04.208 "data_offset": 0, 00:16:04.208 "data_size": 65536 00:16:04.208 }, 00:16:04.208 { 00:16:04.208 "name": "BaseBdev4", 00:16:04.208 "uuid": "4b112488-cf1b-4614-9bf1-dd5bf693e393", 00:16:04.208 "is_configured": true, 00:16:04.208 "data_offset": 0, 00:16:04.208 "data_size": 65536 00:16:04.208 } 00:16:04.208 ] 00:16:04.208 }' 00:16:04.208 12:08:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.208 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.466 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.466 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:04.466 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.466 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1f81e80b-7f0a-4df8-8b84-d499d5ec98ff 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.726 [2024-11-19 12:08:07.960323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:04.726 [2024-11-19 
12:08:07.960377] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:04.726 [2024-11-19 12:08:07.960385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:04.726 [2024-11-19 12:08:07.960632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:04.726 [2024-11-19 12:08:07.967357] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:04.726 [2024-11-19 12:08:07.967385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:04.726 [2024-11-19 12:08:07.967642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.726 NewBaseBdev 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.726 12:08:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.726 [ 00:16:04.726 { 00:16:04.726 "name": "NewBaseBdev", 00:16:04.726 "aliases": [ 00:16:04.726 "1f81e80b-7f0a-4df8-8b84-d499d5ec98ff" 00:16:04.726 ], 00:16:04.726 "product_name": "Malloc disk", 00:16:04.726 "block_size": 512, 00:16:04.726 "num_blocks": 65536, 00:16:04.726 "uuid": "1f81e80b-7f0a-4df8-8b84-d499d5ec98ff", 00:16:04.726 "assigned_rate_limits": { 00:16:04.726 "rw_ios_per_sec": 0, 00:16:04.726 "rw_mbytes_per_sec": 0, 00:16:04.726 "r_mbytes_per_sec": 0, 00:16:04.726 "w_mbytes_per_sec": 0 00:16:04.726 }, 00:16:04.726 "claimed": true, 00:16:04.726 "claim_type": "exclusive_write", 00:16:04.726 "zoned": false, 00:16:04.726 "supported_io_types": { 00:16:04.726 "read": true, 00:16:04.726 "write": true, 00:16:04.726 "unmap": true, 00:16:04.726 "flush": true, 00:16:04.726 "reset": true, 00:16:04.726 "nvme_admin": false, 00:16:04.726 "nvme_io": false, 00:16:04.726 "nvme_io_md": false, 00:16:04.726 "write_zeroes": true, 00:16:04.726 "zcopy": true, 00:16:04.726 "get_zone_info": false, 00:16:04.726 "zone_management": false, 00:16:04.726 "zone_append": false, 00:16:04.726 "compare": false, 00:16:04.726 "compare_and_write": false, 00:16:04.726 "abort": true, 00:16:04.726 "seek_hole": false, 00:16:04.726 "seek_data": false, 00:16:04.726 "copy": true, 00:16:04.726 "nvme_iov_md": false 00:16:04.726 }, 00:16:04.726 "memory_domains": [ 00:16:04.726 { 00:16:04.726 "dma_device_id": "system", 00:16:04.726 "dma_device_type": 1 00:16:04.726 }, 00:16:04.726 { 00:16:04.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.726 "dma_device_type": 2 00:16:04.726 } 
00:16:04.726 ], 00:16:04.726 "driver_specific": {} 00:16:04.726 } 00:16:04.726 ] 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.726 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.726 "name": "Existed_Raid", 00:16:04.726 "uuid": "266d3f5b-ccb7-4622-8db6-408a060a6b40", 00:16:04.726 "strip_size_kb": 64, 00:16:04.726 "state": "online", 00:16:04.726 "raid_level": "raid5f", 00:16:04.726 "superblock": false, 00:16:04.726 "num_base_bdevs": 4, 00:16:04.726 "num_base_bdevs_discovered": 4, 00:16:04.726 "num_base_bdevs_operational": 4, 00:16:04.726 "base_bdevs_list": [ 00:16:04.726 { 00:16:04.726 "name": "NewBaseBdev", 00:16:04.726 "uuid": "1f81e80b-7f0a-4df8-8b84-d499d5ec98ff", 00:16:04.726 "is_configured": true, 00:16:04.726 "data_offset": 0, 00:16:04.726 "data_size": 65536 00:16:04.726 }, 00:16:04.726 { 00:16:04.726 "name": "BaseBdev2", 00:16:04.726 "uuid": "284c0b84-e174-474a-9e5c-c625b14ea01c", 00:16:04.726 "is_configured": true, 00:16:04.726 "data_offset": 0, 00:16:04.726 "data_size": 65536 00:16:04.726 }, 00:16:04.726 { 00:16:04.726 "name": "BaseBdev3", 00:16:04.726 "uuid": "d6488ea5-1847-48af-bcfe-ad232b58b68e", 00:16:04.726 "is_configured": true, 00:16:04.726 "data_offset": 0, 00:16:04.726 "data_size": 65536 00:16:04.726 }, 00:16:04.726 { 00:16:04.726 "name": "BaseBdev4", 00:16:04.726 "uuid": "4b112488-cf1b-4614-9bf1-dd5bf693e393", 00:16:04.726 "is_configured": true, 00:16:04.726 "data_offset": 0, 00:16:04.726 "data_size": 65536 00:16:04.726 } 00:16:04.726 ] 00:16:04.726 }' 00:16:04.727 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.727 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.295 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:05.295 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:05.295 12:08:08 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:05.295 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:05.295 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:05.295 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:05.295 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:05.295 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:05.295 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.295 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.295 [2024-11-19 12:08:08.391263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.295 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.295 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:05.295 "name": "Existed_Raid", 00:16:05.295 "aliases": [ 00:16:05.295 "266d3f5b-ccb7-4622-8db6-408a060a6b40" 00:16:05.295 ], 00:16:05.295 "product_name": "Raid Volume", 00:16:05.295 "block_size": 512, 00:16:05.295 "num_blocks": 196608, 00:16:05.295 "uuid": "266d3f5b-ccb7-4622-8db6-408a060a6b40", 00:16:05.295 "assigned_rate_limits": { 00:16:05.296 "rw_ios_per_sec": 0, 00:16:05.296 "rw_mbytes_per_sec": 0, 00:16:05.296 "r_mbytes_per_sec": 0, 00:16:05.296 "w_mbytes_per_sec": 0 00:16:05.296 }, 00:16:05.296 "claimed": false, 00:16:05.296 "zoned": false, 00:16:05.296 "supported_io_types": { 00:16:05.296 "read": true, 00:16:05.296 "write": true, 00:16:05.296 "unmap": false, 00:16:05.296 "flush": false, 00:16:05.296 "reset": true, 00:16:05.296 "nvme_admin": false, 00:16:05.296 "nvme_io": false, 00:16:05.296 "nvme_io_md": 
false, 00:16:05.296 "write_zeroes": true, 00:16:05.296 "zcopy": false, 00:16:05.296 "get_zone_info": false, 00:16:05.296 "zone_management": false, 00:16:05.296 "zone_append": false, 00:16:05.296 "compare": false, 00:16:05.296 "compare_and_write": false, 00:16:05.296 "abort": false, 00:16:05.296 "seek_hole": false, 00:16:05.296 "seek_data": false, 00:16:05.296 "copy": false, 00:16:05.296 "nvme_iov_md": false 00:16:05.296 }, 00:16:05.296 "driver_specific": { 00:16:05.296 "raid": { 00:16:05.296 "uuid": "266d3f5b-ccb7-4622-8db6-408a060a6b40", 00:16:05.296 "strip_size_kb": 64, 00:16:05.296 "state": "online", 00:16:05.296 "raid_level": "raid5f", 00:16:05.296 "superblock": false, 00:16:05.296 "num_base_bdevs": 4, 00:16:05.296 "num_base_bdevs_discovered": 4, 00:16:05.296 "num_base_bdevs_operational": 4, 00:16:05.296 "base_bdevs_list": [ 00:16:05.296 { 00:16:05.296 "name": "NewBaseBdev", 00:16:05.296 "uuid": "1f81e80b-7f0a-4df8-8b84-d499d5ec98ff", 00:16:05.296 "is_configured": true, 00:16:05.296 "data_offset": 0, 00:16:05.296 "data_size": 65536 00:16:05.296 }, 00:16:05.296 { 00:16:05.296 "name": "BaseBdev2", 00:16:05.296 "uuid": "284c0b84-e174-474a-9e5c-c625b14ea01c", 00:16:05.296 "is_configured": true, 00:16:05.296 "data_offset": 0, 00:16:05.296 "data_size": 65536 00:16:05.296 }, 00:16:05.296 { 00:16:05.296 "name": "BaseBdev3", 00:16:05.296 "uuid": "d6488ea5-1847-48af-bcfe-ad232b58b68e", 00:16:05.296 "is_configured": true, 00:16:05.296 "data_offset": 0, 00:16:05.296 "data_size": 65536 00:16:05.296 }, 00:16:05.296 { 00:16:05.296 "name": "BaseBdev4", 00:16:05.296 "uuid": "4b112488-cf1b-4614-9bf1-dd5bf693e393", 00:16:05.296 "is_configured": true, 00:16:05.296 "data_offset": 0, 00:16:05.296 "data_size": 65536 00:16:05.296 } 00:16:05.296 ] 00:16:05.296 } 00:16:05.296 } 00:16:05.296 }' 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:05.296 12:08:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:05.296 BaseBdev2 00:16:05.296 BaseBdev3 00:16:05.296 BaseBdev4' 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.296 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.556 [2024-11-19 12:08:08.734413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.556 [2024-11-19 12:08:08.734444] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.556 [2024-11-19 12:08:08.734516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.556 [2024-11-19 12:08:08.734805] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.556 [2024-11-19 12:08:08.734824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82695 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82695 ']' 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82695 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.556 12:08:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82695 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.556 killing process with pid 82695 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82695' 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82695 00:16:05.556 [2024-11-19 12:08:08.780641] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.556 12:08:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82695 00:16:05.816 [2024-11-19 12:08:09.153614] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:07.194 00:16:07.194 real 0m11.402s 00:16:07.194 user 0m18.226s 00:16:07.194 sys 0m2.069s 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.194 ************************************ 00:16:07.194 END TEST raid5f_state_function_test 00:16:07.194 ************************************ 00:16:07.194 12:08:10 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:07.194 12:08:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:07.194 12:08:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.194 12:08:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.194 ************************************ 00:16:07.194 START TEST 
raid5f_state_function_test_sb 00:16:07.194 ************************************ 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:07.194 
12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83362 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83362' 00:16:07.194 Process raid pid: 83362 00:16:07.194 12:08:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83362 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83362 ']' 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.194 12:08:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.195 [2024-11-19 12:08:10.363068] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:16:07.195 [2024-11-19 12:08:10.363196] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.195 [2024-11-19 12:08:10.540244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.453 [2024-11-19 12:08:10.652234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.713 [2024-11-19 12:08:10.844336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.713 [2024-11-19 12:08:10.844374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.974 [2024-11-19 12:08:11.187582] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:07.974 [2024-11-19 12:08:11.187634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:07.974 [2024-11-19 12:08:11.187648] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:07.974 [2024-11-19 12:08:11.187658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:07.974 [2024-11-19 12:08:11.187665] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:07.974 [2024-11-19 12:08:11.187673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:07.974 [2024-11-19 12:08:11.187679] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:07.974 [2024-11-19 12:08:11.187687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.974 "name": "Existed_Raid", 00:16:07.974 "uuid": "46115290-7ef9-4cae-8ff6-b4fb56cc4588", 00:16:07.974 "strip_size_kb": 64, 00:16:07.974 "state": "configuring", 00:16:07.974 "raid_level": "raid5f", 00:16:07.974 "superblock": true, 00:16:07.974 "num_base_bdevs": 4, 00:16:07.974 "num_base_bdevs_discovered": 0, 00:16:07.974 "num_base_bdevs_operational": 4, 00:16:07.974 "base_bdevs_list": [ 00:16:07.974 { 00:16:07.974 "name": "BaseBdev1", 00:16:07.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.974 "is_configured": false, 00:16:07.974 "data_offset": 0, 00:16:07.974 "data_size": 0 00:16:07.974 }, 00:16:07.974 { 00:16:07.974 "name": "BaseBdev2", 00:16:07.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.974 "is_configured": false, 00:16:07.974 "data_offset": 0, 00:16:07.974 "data_size": 0 00:16:07.974 }, 00:16:07.974 { 00:16:07.974 "name": "BaseBdev3", 00:16:07.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.974 "is_configured": false, 00:16:07.974 "data_offset": 0, 00:16:07.974 "data_size": 0 00:16:07.974 }, 00:16:07.974 { 00:16:07.974 "name": "BaseBdev4", 00:16:07.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.974 "is_configured": false, 00:16:07.974 "data_offset": 0, 00:16:07.974 "data_size": 0 00:16:07.974 } 00:16:07.974 ] 00:16:07.974 }' 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.974 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:08.234 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:08.234 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.234 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.234 [2024-11-19 12:08:11.574878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:08.234 [2024-11-19 12:08:11.574918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:08.234 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.234 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:08.234 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.234 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.234 [2024-11-19 12:08:11.586863] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.234 [2024-11-19 12:08:11.586903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.234 [2024-11-19 12:08:11.586911] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:08.234 [2024-11-19 12:08:11.586920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:08.234 [2024-11-19 12:08:11.586926] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:08.234 [2024-11-19 12:08:11.586934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:08.234 [2024-11-19 12:08:11.586940] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:08.234 [2024-11-19 12:08:11.586948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:08.234 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.234 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:08.234 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.234 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.494 [2024-11-19 12:08:11.634505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.494 BaseBdev1 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.494 [ 00:16:08.494 { 00:16:08.494 "name": "BaseBdev1", 00:16:08.494 "aliases": [ 00:16:08.494 "bf1d58c9-688a-4e94-86ca-6c8af82f9763" 00:16:08.494 ], 00:16:08.494 "product_name": "Malloc disk", 00:16:08.494 "block_size": 512, 00:16:08.494 "num_blocks": 65536, 00:16:08.494 "uuid": "bf1d58c9-688a-4e94-86ca-6c8af82f9763", 00:16:08.494 "assigned_rate_limits": { 00:16:08.494 "rw_ios_per_sec": 0, 00:16:08.494 "rw_mbytes_per_sec": 0, 00:16:08.494 "r_mbytes_per_sec": 0, 00:16:08.494 "w_mbytes_per_sec": 0 00:16:08.494 }, 00:16:08.494 "claimed": true, 00:16:08.494 "claim_type": "exclusive_write", 00:16:08.494 "zoned": false, 00:16:08.494 "supported_io_types": { 00:16:08.494 "read": true, 00:16:08.494 "write": true, 00:16:08.494 "unmap": true, 00:16:08.494 "flush": true, 00:16:08.494 "reset": true, 00:16:08.494 "nvme_admin": false, 00:16:08.494 "nvme_io": false, 00:16:08.494 "nvme_io_md": false, 00:16:08.494 "write_zeroes": true, 00:16:08.494 "zcopy": true, 00:16:08.494 "get_zone_info": false, 00:16:08.494 "zone_management": false, 00:16:08.494 "zone_append": false, 00:16:08.494 "compare": false, 00:16:08.494 "compare_and_write": false, 00:16:08.494 "abort": true, 00:16:08.494 "seek_hole": false, 00:16:08.494 "seek_data": false, 00:16:08.494 "copy": true, 00:16:08.494 "nvme_iov_md": false 00:16:08.494 }, 00:16:08.494 "memory_domains": [ 00:16:08.494 { 00:16:08.494 "dma_device_id": "system", 00:16:08.494 "dma_device_type": 1 00:16:08.494 }, 00:16:08.494 { 00:16:08.494 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:08.494 "dma_device_type": 2 00:16:08.494 } 00:16:08.494 ], 00:16:08.494 "driver_specific": {} 00:16:08.494 } 00:16:08.494 ] 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.494 12:08:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.494 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.494 "name": "Existed_Raid", 00:16:08.494 "uuid": "4f7f16ae-7f6c-44d6-9273-ded8eed4ec65", 00:16:08.494 "strip_size_kb": 64, 00:16:08.494 "state": "configuring", 00:16:08.494 "raid_level": "raid5f", 00:16:08.494 "superblock": true, 00:16:08.494 "num_base_bdevs": 4, 00:16:08.494 "num_base_bdevs_discovered": 1, 00:16:08.494 "num_base_bdevs_operational": 4, 00:16:08.494 "base_bdevs_list": [ 00:16:08.494 { 00:16:08.494 "name": "BaseBdev1", 00:16:08.494 "uuid": "bf1d58c9-688a-4e94-86ca-6c8af82f9763", 00:16:08.494 "is_configured": true, 00:16:08.494 "data_offset": 2048, 00:16:08.494 "data_size": 63488 00:16:08.494 }, 00:16:08.494 { 00:16:08.494 "name": "BaseBdev2", 00:16:08.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.494 "is_configured": false, 00:16:08.494 "data_offset": 0, 00:16:08.494 "data_size": 0 00:16:08.494 }, 00:16:08.494 { 00:16:08.494 "name": "BaseBdev3", 00:16:08.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.495 "is_configured": false, 00:16:08.495 "data_offset": 0, 00:16:08.495 "data_size": 0 00:16:08.495 }, 00:16:08.495 { 00:16:08.495 "name": "BaseBdev4", 00:16:08.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.495 "is_configured": false, 00:16:08.495 "data_offset": 0, 00:16:08.495 "data_size": 0 00:16:08.495 } 00:16:08.495 ] 00:16:08.495 }' 00:16:08.495 12:08:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.495 12:08:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.754 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:08.754 12:08:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.754 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.754 [2024-11-19 12:08:12.125671] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:08.754 [2024-11-19 12:08:12.125716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:09.013 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.013 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:09.013 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.013 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.013 [2024-11-19 12:08:12.137716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.013 [2024-11-19 12:08:12.139622] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:09.013 [2024-11-19 12:08:12.139698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.013 [2024-11-19 12:08:12.139726] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:09.013 [2024-11-19 12:08:12.139751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:09.013 [2024-11-19 12:08:12.139769] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:09.013 [2024-11-19 12:08:12.139788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:09.013 12:08:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.013 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:09.013 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:09.013 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.013 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.013 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.013 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.013 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.014 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.014 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.014 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.014 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.014 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.014 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.014 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.014 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.014 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.014 12:08:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.014 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.014 "name": "Existed_Raid", 00:16:09.014 "uuid": "70ae551d-311a-4685-bca9-806f062df7e1", 00:16:09.014 "strip_size_kb": 64, 00:16:09.014 "state": "configuring", 00:16:09.014 "raid_level": "raid5f", 00:16:09.014 "superblock": true, 00:16:09.014 "num_base_bdevs": 4, 00:16:09.014 "num_base_bdevs_discovered": 1, 00:16:09.014 "num_base_bdevs_operational": 4, 00:16:09.014 "base_bdevs_list": [ 00:16:09.014 { 00:16:09.014 "name": "BaseBdev1", 00:16:09.014 "uuid": "bf1d58c9-688a-4e94-86ca-6c8af82f9763", 00:16:09.014 "is_configured": true, 00:16:09.014 "data_offset": 2048, 00:16:09.014 "data_size": 63488 00:16:09.014 }, 00:16:09.014 { 00:16:09.014 "name": "BaseBdev2", 00:16:09.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.014 "is_configured": false, 00:16:09.014 "data_offset": 0, 00:16:09.014 "data_size": 0 00:16:09.014 }, 00:16:09.014 { 00:16:09.014 "name": "BaseBdev3", 00:16:09.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.014 "is_configured": false, 00:16:09.014 "data_offset": 0, 00:16:09.014 "data_size": 0 00:16:09.014 }, 00:16:09.014 { 00:16:09.014 "name": "BaseBdev4", 00:16:09.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.014 "is_configured": false, 00:16:09.014 "data_offset": 0, 00:16:09.014 "data_size": 0 00:16:09.014 } 00:16:09.014 ] 00:16:09.014 }' 00:16:09.014 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.014 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.273 [2024-11-19 12:08:12.624976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.273 BaseBdev2 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.273 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.533 [ 00:16:09.533 { 00:16:09.533 "name": "BaseBdev2", 00:16:09.533 "aliases": [ 00:16:09.533 
"85dc0e70-2858-4906-becf-8a4aa6531593" 00:16:09.533 ], 00:16:09.533 "product_name": "Malloc disk", 00:16:09.533 "block_size": 512, 00:16:09.533 "num_blocks": 65536, 00:16:09.533 "uuid": "85dc0e70-2858-4906-becf-8a4aa6531593", 00:16:09.533 "assigned_rate_limits": { 00:16:09.533 "rw_ios_per_sec": 0, 00:16:09.533 "rw_mbytes_per_sec": 0, 00:16:09.533 "r_mbytes_per_sec": 0, 00:16:09.533 "w_mbytes_per_sec": 0 00:16:09.533 }, 00:16:09.533 "claimed": true, 00:16:09.533 "claim_type": "exclusive_write", 00:16:09.533 "zoned": false, 00:16:09.533 "supported_io_types": { 00:16:09.533 "read": true, 00:16:09.533 "write": true, 00:16:09.533 "unmap": true, 00:16:09.533 "flush": true, 00:16:09.533 "reset": true, 00:16:09.533 "nvme_admin": false, 00:16:09.533 "nvme_io": false, 00:16:09.533 "nvme_io_md": false, 00:16:09.533 "write_zeroes": true, 00:16:09.533 "zcopy": true, 00:16:09.533 "get_zone_info": false, 00:16:09.533 "zone_management": false, 00:16:09.533 "zone_append": false, 00:16:09.533 "compare": false, 00:16:09.533 "compare_and_write": false, 00:16:09.533 "abort": true, 00:16:09.533 "seek_hole": false, 00:16:09.533 "seek_data": false, 00:16:09.533 "copy": true, 00:16:09.533 "nvme_iov_md": false 00:16:09.533 }, 00:16:09.533 "memory_domains": [ 00:16:09.534 { 00:16:09.534 "dma_device_id": "system", 00:16:09.534 "dma_device_type": 1 00:16:09.534 }, 00:16:09.534 { 00:16:09.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.534 "dma_device_type": 2 00:16:09.534 } 00:16:09.534 ], 00:16:09.534 "driver_specific": {} 00:16:09.534 } 00:16:09.534 ] 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.534 "name": "Existed_Raid", 00:16:09.534 "uuid": 
"70ae551d-311a-4685-bca9-806f062df7e1", 00:16:09.534 "strip_size_kb": 64, 00:16:09.534 "state": "configuring", 00:16:09.534 "raid_level": "raid5f", 00:16:09.534 "superblock": true, 00:16:09.534 "num_base_bdevs": 4, 00:16:09.534 "num_base_bdevs_discovered": 2, 00:16:09.534 "num_base_bdevs_operational": 4, 00:16:09.534 "base_bdevs_list": [ 00:16:09.534 { 00:16:09.534 "name": "BaseBdev1", 00:16:09.534 "uuid": "bf1d58c9-688a-4e94-86ca-6c8af82f9763", 00:16:09.534 "is_configured": true, 00:16:09.534 "data_offset": 2048, 00:16:09.534 "data_size": 63488 00:16:09.534 }, 00:16:09.534 { 00:16:09.534 "name": "BaseBdev2", 00:16:09.534 "uuid": "85dc0e70-2858-4906-becf-8a4aa6531593", 00:16:09.534 "is_configured": true, 00:16:09.534 "data_offset": 2048, 00:16:09.534 "data_size": 63488 00:16:09.534 }, 00:16:09.534 { 00:16:09.534 "name": "BaseBdev3", 00:16:09.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.534 "is_configured": false, 00:16:09.534 "data_offset": 0, 00:16:09.534 "data_size": 0 00:16:09.534 }, 00:16:09.534 { 00:16:09.534 "name": "BaseBdev4", 00:16:09.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.534 "is_configured": false, 00:16:09.534 "data_offset": 0, 00:16:09.534 "data_size": 0 00:16:09.534 } 00:16:09.534 ] 00:16:09.534 }' 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.534 12:08:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.794 [2024-11-19 12:08:13.124437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:09.794 BaseBdev3 
00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.794 [ 00:16:09.794 { 00:16:09.794 "name": "BaseBdev3", 00:16:09.794 "aliases": [ 00:16:09.794 "1fca0062-e91c-460e-9ffe-b0b88fecb708" 00:16:09.794 ], 00:16:09.794 "product_name": "Malloc disk", 00:16:09.794 "block_size": 512, 00:16:09.794 "num_blocks": 65536, 00:16:09.794 "uuid": "1fca0062-e91c-460e-9ffe-b0b88fecb708", 00:16:09.794 
"assigned_rate_limits": { 00:16:09.794 "rw_ios_per_sec": 0, 00:16:09.794 "rw_mbytes_per_sec": 0, 00:16:09.794 "r_mbytes_per_sec": 0, 00:16:09.794 "w_mbytes_per_sec": 0 00:16:09.794 }, 00:16:09.794 "claimed": true, 00:16:09.794 "claim_type": "exclusive_write", 00:16:09.794 "zoned": false, 00:16:09.794 "supported_io_types": { 00:16:09.794 "read": true, 00:16:09.794 "write": true, 00:16:09.794 "unmap": true, 00:16:09.794 "flush": true, 00:16:09.794 "reset": true, 00:16:09.794 "nvme_admin": false, 00:16:09.794 "nvme_io": false, 00:16:09.794 "nvme_io_md": false, 00:16:09.794 "write_zeroes": true, 00:16:09.794 "zcopy": true, 00:16:09.794 "get_zone_info": false, 00:16:09.794 "zone_management": false, 00:16:09.794 "zone_append": false, 00:16:09.794 "compare": false, 00:16:09.794 "compare_and_write": false, 00:16:09.794 "abort": true, 00:16:09.794 "seek_hole": false, 00:16:09.794 "seek_data": false, 00:16:09.794 "copy": true, 00:16:09.794 "nvme_iov_md": false 00:16:09.794 }, 00:16:09.794 "memory_domains": [ 00:16:09.794 { 00:16:09.794 "dma_device_id": "system", 00:16:09.794 "dma_device_type": 1 00:16:09.794 }, 00:16:09.794 { 00:16:09.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.794 "dma_device_type": 2 00:16:09.794 } 00:16:09.794 ], 00:16:09.794 "driver_specific": {} 00:16:09.794 } 00:16:09.794 ] 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:09.794 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:09.795 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.795 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:09.795 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.795 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.795 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.795 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.795 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.795 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.795 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.795 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.054 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.054 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.054 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.054 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.054 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.054 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.054 "name": "Existed_Raid", 00:16:10.054 "uuid": "70ae551d-311a-4685-bca9-806f062df7e1", 00:16:10.054 "strip_size_kb": 64, 00:16:10.054 "state": "configuring", 00:16:10.054 "raid_level": "raid5f", 00:16:10.054 "superblock": true, 00:16:10.054 "num_base_bdevs": 4, 00:16:10.054 "num_base_bdevs_discovered": 3, 
00:16:10.054 "num_base_bdevs_operational": 4, 00:16:10.054 "base_bdevs_list": [ 00:16:10.054 { 00:16:10.054 "name": "BaseBdev1", 00:16:10.054 "uuid": "bf1d58c9-688a-4e94-86ca-6c8af82f9763", 00:16:10.054 "is_configured": true, 00:16:10.054 "data_offset": 2048, 00:16:10.054 "data_size": 63488 00:16:10.054 }, 00:16:10.054 { 00:16:10.054 "name": "BaseBdev2", 00:16:10.054 "uuid": "85dc0e70-2858-4906-becf-8a4aa6531593", 00:16:10.054 "is_configured": true, 00:16:10.054 "data_offset": 2048, 00:16:10.054 "data_size": 63488 00:16:10.054 }, 00:16:10.054 { 00:16:10.054 "name": "BaseBdev3", 00:16:10.054 "uuid": "1fca0062-e91c-460e-9ffe-b0b88fecb708", 00:16:10.054 "is_configured": true, 00:16:10.054 "data_offset": 2048, 00:16:10.054 "data_size": 63488 00:16:10.054 }, 00:16:10.054 { 00:16:10.054 "name": "BaseBdev4", 00:16:10.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.054 "is_configured": false, 00:16:10.054 "data_offset": 0, 00:16:10.054 "data_size": 0 00:16:10.054 } 00:16:10.055 ] 00:16:10.055 }' 00:16:10.055 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.055 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.314 [2024-11-19 12:08:13.615258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:10.314 BaseBdev4 00:16:10.314 [2024-11-19 12:08:13.615610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:10.314 [2024-11-19 12:08:13.615629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 
00:16:10.314 [2024-11-19 12:08:13.615898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.314 [2024-11-19 12:08:13.622985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:10.314 [2024-11-19 12:08:13.623061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:10.314 [2024-11-19 12:08:13.623345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:10.314 12:08:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.314 [ 00:16:10.314 { 00:16:10.314 "name": "BaseBdev4", 00:16:10.314 "aliases": [ 00:16:10.314 "9aab1c6d-2e49-487d-b074-248cb73bd633" 00:16:10.314 ], 00:16:10.314 "product_name": "Malloc disk", 00:16:10.314 "block_size": 512, 00:16:10.314 "num_blocks": 65536, 00:16:10.314 "uuid": "9aab1c6d-2e49-487d-b074-248cb73bd633", 00:16:10.314 "assigned_rate_limits": { 00:16:10.314 "rw_ios_per_sec": 0, 00:16:10.314 "rw_mbytes_per_sec": 0, 00:16:10.314 "r_mbytes_per_sec": 0, 00:16:10.314 "w_mbytes_per_sec": 0 00:16:10.314 }, 00:16:10.314 "claimed": true, 00:16:10.314 "claim_type": "exclusive_write", 00:16:10.314 "zoned": false, 00:16:10.314 "supported_io_types": { 00:16:10.314 "read": true, 00:16:10.314 "write": true, 00:16:10.314 "unmap": true, 00:16:10.314 "flush": true, 00:16:10.314 "reset": true, 00:16:10.314 "nvme_admin": false, 00:16:10.314 "nvme_io": false, 00:16:10.314 "nvme_io_md": false, 00:16:10.314 "write_zeroes": true, 00:16:10.314 "zcopy": true, 00:16:10.314 "get_zone_info": false, 00:16:10.314 "zone_management": false, 00:16:10.314 "zone_append": false, 00:16:10.314 "compare": false, 00:16:10.314 "compare_and_write": false, 00:16:10.314 "abort": true, 00:16:10.314 "seek_hole": false, 00:16:10.314 "seek_data": false, 00:16:10.314 "copy": true, 00:16:10.314 "nvme_iov_md": false 00:16:10.314 }, 00:16:10.314 "memory_domains": [ 00:16:10.314 { 00:16:10.314 "dma_device_id": "system", 00:16:10.314 "dma_device_type": 1 00:16:10.314 }, 00:16:10.314 { 00:16:10.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.314 "dma_device_type": 2 00:16:10.314 } 00:16:10.314 ], 00:16:10.314 "driver_specific": {} 00:16:10.314 } 00:16:10.314 ] 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.314 12:08:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.314 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.315 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.315 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.315 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.315 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.315 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.315 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.315 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.315 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.315 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.315 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:10.315 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.574 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.574 "name": "Existed_Raid", 00:16:10.574 "uuid": "70ae551d-311a-4685-bca9-806f062df7e1", 00:16:10.574 "strip_size_kb": 64, 00:16:10.574 "state": "online", 00:16:10.574 "raid_level": "raid5f", 00:16:10.574 "superblock": true, 00:16:10.574 "num_base_bdevs": 4, 00:16:10.574 "num_base_bdevs_discovered": 4, 00:16:10.574 "num_base_bdevs_operational": 4, 00:16:10.574 "base_bdevs_list": [ 00:16:10.574 { 00:16:10.574 "name": "BaseBdev1", 00:16:10.574 "uuid": "bf1d58c9-688a-4e94-86ca-6c8af82f9763", 00:16:10.574 "is_configured": true, 00:16:10.574 "data_offset": 2048, 00:16:10.574 "data_size": 63488 00:16:10.574 }, 00:16:10.574 { 00:16:10.574 "name": "BaseBdev2", 00:16:10.574 "uuid": "85dc0e70-2858-4906-becf-8a4aa6531593", 00:16:10.574 "is_configured": true, 00:16:10.574 "data_offset": 2048, 00:16:10.574 "data_size": 63488 00:16:10.574 }, 00:16:10.574 { 00:16:10.574 "name": "BaseBdev3", 00:16:10.574 "uuid": "1fca0062-e91c-460e-9ffe-b0b88fecb708", 00:16:10.574 "is_configured": true, 00:16:10.574 "data_offset": 2048, 00:16:10.574 "data_size": 63488 00:16:10.574 }, 00:16:10.574 { 00:16:10.574 "name": "BaseBdev4", 00:16:10.574 "uuid": "9aab1c6d-2e49-487d-b074-248cb73bd633", 00:16:10.574 "is_configured": true, 00:16:10.574 "data_offset": 2048, 00:16:10.574 "data_size": 63488 00:16:10.574 } 00:16:10.574 ] 00:16:10.574 }' 00:16:10.574 12:08:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.574 12:08:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.833 [2024-11-19 12:08:14.102844] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:10.833 "name": "Existed_Raid", 00:16:10.833 "aliases": [ 00:16:10.833 "70ae551d-311a-4685-bca9-806f062df7e1" 00:16:10.833 ], 00:16:10.833 "product_name": "Raid Volume", 00:16:10.833 "block_size": 512, 00:16:10.833 "num_blocks": 190464, 00:16:10.833 "uuid": "70ae551d-311a-4685-bca9-806f062df7e1", 00:16:10.833 "assigned_rate_limits": { 00:16:10.833 "rw_ios_per_sec": 0, 00:16:10.833 "rw_mbytes_per_sec": 0, 00:16:10.833 "r_mbytes_per_sec": 0, 00:16:10.833 "w_mbytes_per_sec": 0 00:16:10.833 }, 00:16:10.833 "claimed": false, 00:16:10.833 "zoned": false, 00:16:10.833 "supported_io_types": { 00:16:10.833 "read": true, 00:16:10.833 "write": true, 00:16:10.833 "unmap": false, 00:16:10.833 "flush": false, 
00:16:10.833 "reset": true, 00:16:10.833 "nvme_admin": false, 00:16:10.833 "nvme_io": false, 00:16:10.833 "nvme_io_md": false, 00:16:10.833 "write_zeroes": true, 00:16:10.833 "zcopy": false, 00:16:10.833 "get_zone_info": false, 00:16:10.833 "zone_management": false, 00:16:10.833 "zone_append": false, 00:16:10.833 "compare": false, 00:16:10.833 "compare_and_write": false, 00:16:10.833 "abort": false, 00:16:10.833 "seek_hole": false, 00:16:10.833 "seek_data": false, 00:16:10.833 "copy": false, 00:16:10.833 "nvme_iov_md": false 00:16:10.833 }, 00:16:10.833 "driver_specific": { 00:16:10.833 "raid": { 00:16:10.833 "uuid": "70ae551d-311a-4685-bca9-806f062df7e1", 00:16:10.833 "strip_size_kb": 64, 00:16:10.833 "state": "online", 00:16:10.833 "raid_level": "raid5f", 00:16:10.833 "superblock": true, 00:16:10.833 "num_base_bdevs": 4, 00:16:10.833 "num_base_bdevs_discovered": 4, 00:16:10.833 "num_base_bdevs_operational": 4, 00:16:10.833 "base_bdevs_list": [ 00:16:10.833 { 00:16:10.833 "name": "BaseBdev1", 00:16:10.833 "uuid": "bf1d58c9-688a-4e94-86ca-6c8af82f9763", 00:16:10.833 "is_configured": true, 00:16:10.833 "data_offset": 2048, 00:16:10.833 "data_size": 63488 00:16:10.833 }, 00:16:10.833 { 00:16:10.833 "name": "BaseBdev2", 00:16:10.833 "uuid": "85dc0e70-2858-4906-becf-8a4aa6531593", 00:16:10.833 "is_configured": true, 00:16:10.833 "data_offset": 2048, 00:16:10.833 "data_size": 63488 00:16:10.833 }, 00:16:10.833 { 00:16:10.833 "name": "BaseBdev3", 00:16:10.833 "uuid": "1fca0062-e91c-460e-9ffe-b0b88fecb708", 00:16:10.833 "is_configured": true, 00:16:10.833 "data_offset": 2048, 00:16:10.833 "data_size": 63488 00:16:10.833 }, 00:16:10.833 { 00:16:10.833 "name": "BaseBdev4", 00:16:10.833 "uuid": "9aab1c6d-2e49-487d-b074-248cb73bd633", 00:16:10.833 "is_configured": true, 00:16:10.833 "data_offset": 2048, 00:16:10.833 "data_size": 63488 00:16:10.833 } 00:16:10.833 ] 00:16:10.833 } 00:16:10.833 } 00:16:10.833 }' 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:10.833 BaseBdev2 00:16:10.833 BaseBdev3 00:16:10.833 BaseBdev4' 00:16:10.833 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.093 12:08:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.093 12:08:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.093 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.093 [2024-11-19 12:08:14.406180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.353 "name": "Existed_Raid", 00:16:11.353 "uuid": "70ae551d-311a-4685-bca9-806f062df7e1", 00:16:11.353 "strip_size_kb": 64, 00:16:11.353 "state": "online", 00:16:11.353 "raid_level": "raid5f", 00:16:11.353 "superblock": true, 00:16:11.353 "num_base_bdevs": 4, 00:16:11.353 "num_base_bdevs_discovered": 3, 00:16:11.353 "num_base_bdevs_operational": 3, 00:16:11.353 "base_bdevs_list": [ 00:16:11.353 { 00:16:11.353 "name": 
null, 00:16:11.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.353 "is_configured": false, 00:16:11.353 "data_offset": 0, 00:16:11.353 "data_size": 63488 00:16:11.353 }, 00:16:11.353 { 00:16:11.353 "name": "BaseBdev2", 00:16:11.353 "uuid": "85dc0e70-2858-4906-becf-8a4aa6531593", 00:16:11.353 "is_configured": true, 00:16:11.353 "data_offset": 2048, 00:16:11.353 "data_size": 63488 00:16:11.353 }, 00:16:11.353 { 00:16:11.353 "name": "BaseBdev3", 00:16:11.353 "uuid": "1fca0062-e91c-460e-9ffe-b0b88fecb708", 00:16:11.353 "is_configured": true, 00:16:11.353 "data_offset": 2048, 00:16:11.353 "data_size": 63488 00:16:11.353 }, 00:16:11.353 { 00:16:11.353 "name": "BaseBdev4", 00:16:11.353 "uuid": "9aab1c6d-2e49-487d-b074-248cb73bd633", 00:16:11.353 "is_configured": true, 00:16:11.353 "data_offset": 2048, 00:16:11.353 "data_size": 63488 00:16:11.353 } 00:16:11.353 ] 00:16:11.353 }' 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.353 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.612 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:11.612 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:11.612 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.612 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.612 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.612 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:11.612 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.612 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:11.612 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:11.612 12:08:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:11.612 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.612 12:08:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.612 [2024-11-19 12:08:14.974608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:11.612 [2024-11-19 12:08:14.974817] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.873 [2024-11-19 12:08:15.064969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.873 [2024-11-19 12:08:15.120876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.873 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.133 [2024-11-19 
12:08:15.269370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:12.133 [2024-11-19 12:08:15.269418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:12.133 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.134 12:08:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.134 BaseBdev2 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.134 [ 00:16:12.134 { 00:16:12.134 "name": "BaseBdev2", 00:16:12.134 "aliases": [ 00:16:12.134 "3f626998-a0cd-43b4-97d6-8f13dd631f37" 00:16:12.134 ], 00:16:12.134 "product_name": "Malloc disk", 00:16:12.134 "block_size": 512, 00:16:12.134 
"num_blocks": 65536, 00:16:12.134 "uuid": "3f626998-a0cd-43b4-97d6-8f13dd631f37", 00:16:12.134 "assigned_rate_limits": { 00:16:12.134 "rw_ios_per_sec": 0, 00:16:12.134 "rw_mbytes_per_sec": 0, 00:16:12.134 "r_mbytes_per_sec": 0, 00:16:12.134 "w_mbytes_per_sec": 0 00:16:12.134 }, 00:16:12.134 "claimed": false, 00:16:12.134 "zoned": false, 00:16:12.134 "supported_io_types": { 00:16:12.134 "read": true, 00:16:12.134 "write": true, 00:16:12.134 "unmap": true, 00:16:12.134 "flush": true, 00:16:12.134 "reset": true, 00:16:12.134 "nvme_admin": false, 00:16:12.134 "nvme_io": false, 00:16:12.134 "nvme_io_md": false, 00:16:12.134 "write_zeroes": true, 00:16:12.134 "zcopy": true, 00:16:12.134 "get_zone_info": false, 00:16:12.134 "zone_management": false, 00:16:12.134 "zone_append": false, 00:16:12.134 "compare": false, 00:16:12.134 "compare_and_write": false, 00:16:12.134 "abort": true, 00:16:12.134 "seek_hole": false, 00:16:12.134 "seek_data": false, 00:16:12.134 "copy": true, 00:16:12.134 "nvme_iov_md": false 00:16:12.134 }, 00:16:12.134 "memory_domains": [ 00:16:12.134 { 00:16:12.134 "dma_device_id": "system", 00:16:12.134 "dma_device_type": 1 00:16:12.134 }, 00:16:12.134 { 00:16:12.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.134 "dma_device_type": 2 00:16:12.134 } 00:16:12.134 ], 00:16:12.134 "driver_specific": {} 00:16:12.134 } 00:16:12.134 ] 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:12.134 12:08:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.134 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.394 BaseBdev3 00:16:12.394 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.394 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:12.394 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:12.394 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:12.394 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:12.394 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:12.394 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:12.394 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:12.394 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.394 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.394 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.394 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:12.394 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.394 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.394 [ 00:16:12.394 { 00:16:12.394 "name": "BaseBdev3", 00:16:12.394 "aliases": [ 00:16:12.394 
"ed54937d-10a9-4490-b69e-208249599e76" 00:16:12.394 ], 00:16:12.394 "product_name": "Malloc disk", 00:16:12.394 "block_size": 512, 00:16:12.394 "num_blocks": 65536, 00:16:12.394 "uuid": "ed54937d-10a9-4490-b69e-208249599e76", 00:16:12.394 "assigned_rate_limits": { 00:16:12.394 "rw_ios_per_sec": 0, 00:16:12.394 "rw_mbytes_per_sec": 0, 00:16:12.394 "r_mbytes_per_sec": 0, 00:16:12.394 "w_mbytes_per_sec": 0 00:16:12.394 }, 00:16:12.394 "claimed": false, 00:16:12.394 "zoned": false, 00:16:12.394 "supported_io_types": { 00:16:12.394 "read": true, 00:16:12.394 "write": true, 00:16:12.394 "unmap": true, 00:16:12.394 "flush": true, 00:16:12.394 "reset": true, 00:16:12.394 "nvme_admin": false, 00:16:12.394 "nvme_io": false, 00:16:12.394 "nvme_io_md": false, 00:16:12.394 "write_zeroes": true, 00:16:12.394 "zcopy": true, 00:16:12.394 "get_zone_info": false, 00:16:12.394 "zone_management": false, 00:16:12.394 "zone_append": false, 00:16:12.394 "compare": false, 00:16:12.394 "compare_and_write": false, 00:16:12.394 "abort": true, 00:16:12.395 "seek_hole": false, 00:16:12.395 "seek_data": false, 00:16:12.395 "copy": true, 00:16:12.395 "nvme_iov_md": false 00:16:12.395 }, 00:16:12.395 "memory_domains": [ 00:16:12.395 { 00:16:12.395 "dma_device_id": "system", 00:16:12.395 "dma_device_type": 1 00:16:12.395 }, 00:16:12.395 { 00:16:12.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.395 "dma_device_type": 2 00:16:12.395 } 00:16:12.395 ], 00:16:12.395 "driver_specific": {} 00:16:12.395 } 00:16:12.395 ] 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:12.395 12:08:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.395 BaseBdev4 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:12.395 [ 00:16:12.395 { 00:16:12.395 "name": "BaseBdev4", 00:16:12.395 "aliases": [ 00:16:12.395 "e5854ac1-afe0-47c6-877d-5ca4c82b7351" 00:16:12.395 ], 00:16:12.395 "product_name": "Malloc disk", 00:16:12.395 "block_size": 512, 00:16:12.395 "num_blocks": 65536, 00:16:12.395 "uuid": "e5854ac1-afe0-47c6-877d-5ca4c82b7351", 00:16:12.395 "assigned_rate_limits": { 00:16:12.395 "rw_ios_per_sec": 0, 00:16:12.395 "rw_mbytes_per_sec": 0, 00:16:12.395 "r_mbytes_per_sec": 0, 00:16:12.395 "w_mbytes_per_sec": 0 00:16:12.395 }, 00:16:12.395 "claimed": false, 00:16:12.395 "zoned": false, 00:16:12.395 "supported_io_types": { 00:16:12.395 "read": true, 00:16:12.395 "write": true, 00:16:12.395 "unmap": true, 00:16:12.395 "flush": true, 00:16:12.395 "reset": true, 00:16:12.395 "nvme_admin": false, 00:16:12.395 "nvme_io": false, 00:16:12.395 "nvme_io_md": false, 00:16:12.395 "write_zeroes": true, 00:16:12.395 "zcopy": true, 00:16:12.395 "get_zone_info": false, 00:16:12.395 "zone_management": false, 00:16:12.395 "zone_append": false, 00:16:12.395 "compare": false, 00:16:12.395 "compare_and_write": false, 00:16:12.395 "abort": true, 00:16:12.395 "seek_hole": false, 00:16:12.395 "seek_data": false, 00:16:12.395 "copy": true, 00:16:12.395 "nvme_iov_md": false 00:16:12.395 }, 00:16:12.395 "memory_domains": [ 00:16:12.395 { 00:16:12.395 "dma_device_id": "system", 00:16:12.395 "dma_device_type": 1 00:16:12.395 }, 00:16:12.395 { 00:16:12.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.395 "dma_device_type": 2 00:16:12.395 } 00:16:12.395 ], 00:16:12.395 "driver_specific": {} 00:16:12.395 } 00:16:12.395 ] 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:12.395 12:08:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.395 [2024-11-19 12:08:15.651312] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:12.395 [2024-11-19 12:08:15.651394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:12.395 [2024-11-19 12:08:15.651440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.395 [2024-11-19 12:08:15.653199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:12.395 [2024-11-19 12:08:15.653302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.395 "name": "Existed_Raid", 00:16:12.395 "uuid": "af11a487-5997-4043-8d4c-0880c22d9442", 00:16:12.395 "strip_size_kb": 64, 00:16:12.395 "state": "configuring", 00:16:12.395 "raid_level": "raid5f", 00:16:12.395 "superblock": true, 00:16:12.395 "num_base_bdevs": 4, 00:16:12.395 "num_base_bdevs_discovered": 3, 00:16:12.395 "num_base_bdevs_operational": 4, 00:16:12.395 "base_bdevs_list": [ 00:16:12.395 { 00:16:12.395 "name": "BaseBdev1", 00:16:12.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.395 "is_configured": false, 00:16:12.395 "data_offset": 0, 00:16:12.395 "data_size": 0 00:16:12.395 }, 00:16:12.395 { 00:16:12.395 "name": "BaseBdev2", 00:16:12.395 "uuid": "3f626998-a0cd-43b4-97d6-8f13dd631f37", 00:16:12.395 "is_configured": true, 00:16:12.395 "data_offset": 2048, 00:16:12.395 
"data_size": 63488 00:16:12.395 }, 00:16:12.395 { 00:16:12.395 "name": "BaseBdev3", 00:16:12.395 "uuid": "ed54937d-10a9-4490-b69e-208249599e76", 00:16:12.395 "is_configured": true, 00:16:12.395 "data_offset": 2048, 00:16:12.395 "data_size": 63488 00:16:12.395 }, 00:16:12.395 { 00:16:12.395 "name": "BaseBdev4", 00:16:12.395 "uuid": "e5854ac1-afe0-47c6-877d-5ca4c82b7351", 00:16:12.395 "is_configured": true, 00:16:12.395 "data_offset": 2048, 00:16:12.395 "data_size": 63488 00:16:12.395 } 00:16:12.395 ] 00:16:12.395 }' 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.395 12:08:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.964 [2024-11-19 12:08:16.078571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.964 12:08:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.964 "name": "Existed_Raid", 00:16:12.964 "uuid": "af11a487-5997-4043-8d4c-0880c22d9442", 00:16:12.964 "strip_size_kb": 64, 00:16:12.964 "state": "configuring", 00:16:12.964 "raid_level": "raid5f", 00:16:12.964 "superblock": true, 00:16:12.964 "num_base_bdevs": 4, 00:16:12.964 "num_base_bdevs_discovered": 2, 00:16:12.964 "num_base_bdevs_operational": 4, 00:16:12.964 "base_bdevs_list": [ 00:16:12.964 { 00:16:12.964 "name": "BaseBdev1", 00:16:12.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.964 "is_configured": false, 00:16:12.964 "data_offset": 0, 00:16:12.964 "data_size": 0 00:16:12.964 }, 00:16:12.964 { 00:16:12.964 "name": null, 00:16:12.964 "uuid": "3f626998-a0cd-43b4-97d6-8f13dd631f37", 00:16:12.964 
"is_configured": false, 00:16:12.964 "data_offset": 0, 00:16:12.964 "data_size": 63488 00:16:12.964 }, 00:16:12.964 { 00:16:12.964 "name": "BaseBdev3", 00:16:12.964 "uuid": "ed54937d-10a9-4490-b69e-208249599e76", 00:16:12.964 "is_configured": true, 00:16:12.964 "data_offset": 2048, 00:16:12.964 "data_size": 63488 00:16:12.964 }, 00:16:12.964 { 00:16:12.964 "name": "BaseBdev4", 00:16:12.964 "uuid": "e5854ac1-afe0-47c6-877d-5ca4c82b7351", 00:16:12.964 "is_configured": true, 00:16:12.964 "data_offset": 2048, 00:16:12.964 "data_size": 63488 00:16:12.964 } 00:16:12.964 ] 00:16:12.964 }' 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.964 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.224 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:13.224 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.225 [2024-11-19 12:08:16.545665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:13.225 BaseBdev1 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.225 [ 00:16:13.225 { 00:16:13.225 "name": "BaseBdev1", 00:16:13.225 "aliases": [ 00:16:13.225 "e3893d9a-ba7e-4f77-8d36-277068b0abb7" 00:16:13.225 ], 00:16:13.225 "product_name": "Malloc disk", 00:16:13.225 "block_size": 512, 00:16:13.225 "num_blocks": 65536, 00:16:13.225 "uuid": "e3893d9a-ba7e-4f77-8d36-277068b0abb7", 
00:16:13.225 "assigned_rate_limits": { 00:16:13.225 "rw_ios_per_sec": 0, 00:16:13.225 "rw_mbytes_per_sec": 0, 00:16:13.225 "r_mbytes_per_sec": 0, 00:16:13.225 "w_mbytes_per_sec": 0 00:16:13.225 }, 00:16:13.225 "claimed": true, 00:16:13.225 "claim_type": "exclusive_write", 00:16:13.225 "zoned": false, 00:16:13.225 "supported_io_types": { 00:16:13.225 "read": true, 00:16:13.225 "write": true, 00:16:13.225 "unmap": true, 00:16:13.225 "flush": true, 00:16:13.225 "reset": true, 00:16:13.225 "nvme_admin": false, 00:16:13.225 "nvme_io": false, 00:16:13.225 "nvme_io_md": false, 00:16:13.225 "write_zeroes": true, 00:16:13.225 "zcopy": true, 00:16:13.225 "get_zone_info": false, 00:16:13.225 "zone_management": false, 00:16:13.225 "zone_append": false, 00:16:13.225 "compare": false, 00:16:13.225 "compare_and_write": false, 00:16:13.225 "abort": true, 00:16:13.225 "seek_hole": false, 00:16:13.225 "seek_data": false, 00:16:13.225 "copy": true, 00:16:13.225 "nvme_iov_md": false 00:16:13.225 }, 00:16:13.225 "memory_domains": [ 00:16:13.225 { 00:16:13.225 "dma_device_id": "system", 00:16:13.225 "dma_device_type": 1 00:16:13.225 }, 00:16:13.225 { 00:16:13.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.225 "dma_device_type": 2 00:16:13.225 } 00:16:13.225 ], 00:16:13.225 "driver_specific": {} 00:16:13.225 } 00:16:13.225 ] 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.225 12:08:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.225 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.485 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.485 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.485 "name": "Existed_Raid", 00:16:13.485 "uuid": "af11a487-5997-4043-8d4c-0880c22d9442", 00:16:13.485 "strip_size_kb": 64, 00:16:13.485 "state": "configuring", 00:16:13.485 "raid_level": "raid5f", 00:16:13.485 "superblock": true, 00:16:13.485 "num_base_bdevs": 4, 00:16:13.485 "num_base_bdevs_discovered": 3, 00:16:13.485 "num_base_bdevs_operational": 4, 00:16:13.485 "base_bdevs_list": [ 00:16:13.485 { 00:16:13.485 "name": "BaseBdev1", 00:16:13.485 "uuid": "e3893d9a-ba7e-4f77-8d36-277068b0abb7", 
00:16:13.485 "is_configured": true, 00:16:13.485 "data_offset": 2048, 00:16:13.485 "data_size": 63488 00:16:13.485 }, 00:16:13.485 { 00:16:13.485 "name": null, 00:16:13.485 "uuid": "3f626998-a0cd-43b4-97d6-8f13dd631f37", 00:16:13.485 "is_configured": false, 00:16:13.485 "data_offset": 0, 00:16:13.485 "data_size": 63488 00:16:13.485 }, 00:16:13.485 { 00:16:13.485 "name": "BaseBdev3", 00:16:13.485 "uuid": "ed54937d-10a9-4490-b69e-208249599e76", 00:16:13.485 "is_configured": true, 00:16:13.485 "data_offset": 2048, 00:16:13.485 "data_size": 63488 00:16:13.485 }, 00:16:13.485 { 00:16:13.485 "name": "BaseBdev4", 00:16:13.485 "uuid": "e5854ac1-afe0-47c6-877d-5ca4c82b7351", 00:16:13.485 "is_configured": true, 00:16:13.485 "data_offset": 2048, 00:16:13.485 "data_size": 63488 00:16:13.485 } 00:16:13.485 ] 00:16:13.485 }' 00:16:13.485 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.485 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.746 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.746 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.746 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.746 12:08:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:13.746 12:08:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.746 [2024-11-19 12:08:17.032886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.746 "name": "Existed_Raid", 00:16:13.746 "uuid": "af11a487-5997-4043-8d4c-0880c22d9442", 00:16:13.746 "strip_size_kb": 64, 00:16:13.746 "state": "configuring", 00:16:13.746 "raid_level": "raid5f", 00:16:13.746 "superblock": true, 00:16:13.746 "num_base_bdevs": 4, 00:16:13.746 "num_base_bdevs_discovered": 2, 00:16:13.746 "num_base_bdevs_operational": 4, 00:16:13.746 "base_bdevs_list": [ 00:16:13.746 { 00:16:13.746 "name": "BaseBdev1", 00:16:13.746 "uuid": "e3893d9a-ba7e-4f77-8d36-277068b0abb7", 00:16:13.746 "is_configured": true, 00:16:13.746 "data_offset": 2048, 00:16:13.746 "data_size": 63488 00:16:13.746 }, 00:16:13.746 { 00:16:13.746 "name": null, 00:16:13.746 "uuid": "3f626998-a0cd-43b4-97d6-8f13dd631f37", 00:16:13.746 "is_configured": false, 00:16:13.746 "data_offset": 0, 00:16:13.746 "data_size": 63488 00:16:13.746 }, 00:16:13.746 { 00:16:13.746 "name": null, 00:16:13.746 "uuid": "ed54937d-10a9-4490-b69e-208249599e76", 00:16:13.746 "is_configured": false, 00:16:13.746 "data_offset": 0, 00:16:13.746 "data_size": 63488 00:16:13.746 }, 00:16:13.746 { 00:16:13.746 "name": "BaseBdev4", 00:16:13.746 "uuid": "e5854ac1-afe0-47c6-877d-5ca4c82b7351", 00:16:13.746 "is_configured": true, 00:16:13.746 "data_offset": 2048, 00:16:13.746 "data_size": 63488 00:16:13.746 } 00:16:13.746 ] 00:16:13.746 }' 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.746 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.315 [2024-11-19 12:08:17.496091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.315 "name": "Existed_Raid", 00:16:14.315 "uuid": "af11a487-5997-4043-8d4c-0880c22d9442", 00:16:14.315 "strip_size_kb": 64, 00:16:14.315 "state": "configuring", 00:16:14.315 "raid_level": "raid5f", 00:16:14.315 "superblock": true, 00:16:14.315 "num_base_bdevs": 4, 00:16:14.315 "num_base_bdevs_discovered": 3, 00:16:14.315 "num_base_bdevs_operational": 4, 00:16:14.315 "base_bdevs_list": [ 00:16:14.315 { 00:16:14.315 "name": "BaseBdev1", 00:16:14.315 "uuid": "e3893d9a-ba7e-4f77-8d36-277068b0abb7", 00:16:14.315 "is_configured": true, 00:16:14.315 "data_offset": 2048, 00:16:14.315 "data_size": 63488 00:16:14.315 }, 00:16:14.315 { 00:16:14.315 "name": null, 00:16:14.315 "uuid": "3f626998-a0cd-43b4-97d6-8f13dd631f37", 00:16:14.315 "is_configured": false, 00:16:14.315 "data_offset": 0, 00:16:14.315 "data_size": 63488 00:16:14.315 }, 00:16:14.315 { 00:16:14.315 "name": "BaseBdev3", 00:16:14.315 "uuid": "ed54937d-10a9-4490-b69e-208249599e76", 
00:16:14.315 "is_configured": true, 00:16:14.315 "data_offset": 2048, 00:16:14.315 "data_size": 63488 00:16:14.315 }, 00:16:14.315 { 00:16:14.315 "name": "BaseBdev4", 00:16:14.315 "uuid": "e5854ac1-afe0-47c6-877d-5ca4c82b7351", 00:16:14.315 "is_configured": true, 00:16:14.315 "data_offset": 2048, 00:16:14.315 "data_size": 63488 00:16:14.315 } 00:16:14.315 ] 00:16:14.315 }' 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.315 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.575 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.575 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:14.575 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.575 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.575 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.575 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:14.575 12:08:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:14.575 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.575 12:08:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.575 [2024-11-19 12:08:17.923366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:14.835 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.835 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:14.835 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.835 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.835 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.835 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.835 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.835 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.835 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.836 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.836 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.836 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.836 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.836 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.836 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.836 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.836 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.836 "name": "Existed_Raid", 00:16:14.836 "uuid": "af11a487-5997-4043-8d4c-0880c22d9442", 00:16:14.836 "strip_size_kb": 64, 00:16:14.836 "state": "configuring", 00:16:14.836 "raid_level": "raid5f", 
00:16:14.836 "superblock": true, 00:16:14.836 "num_base_bdevs": 4, 00:16:14.836 "num_base_bdevs_discovered": 2, 00:16:14.836 "num_base_bdevs_operational": 4, 00:16:14.836 "base_bdevs_list": [ 00:16:14.836 { 00:16:14.836 "name": null, 00:16:14.836 "uuid": "e3893d9a-ba7e-4f77-8d36-277068b0abb7", 00:16:14.836 "is_configured": false, 00:16:14.836 "data_offset": 0, 00:16:14.836 "data_size": 63488 00:16:14.836 }, 00:16:14.836 { 00:16:14.836 "name": null, 00:16:14.836 "uuid": "3f626998-a0cd-43b4-97d6-8f13dd631f37", 00:16:14.836 "is_configured": false, 00:16:14.836 "data_offset": 0, 00:16:14.836 "data_size": 63488 00:16:14.836 }, 00:16:14.836 { 00:16:14.836 "name": "BaseBdev3", 00:16:14.836 "uuid": "ed54937d-10a9-4490-b69e-208249599e76", 00:16:14.836 "is_configured": true, 00:16:14.836 "data_offset": 2048, 00:16:14.836 "data_size": 63488 00:16:14.836 }, 00:16:14.836 { 00:16:14.836 "name": "BaseBdev4", 00:16:14.836 "uuid": "e5854ac1-afe0-47c6-877d-5ca4c82b7351", 00:16:14.836 "is_configured": true, 00:16:14.836 "data_offset": 2048, 00:16:14.836 "data_size": 63488 00:16:14.836 } 00:16:14.836 ] 00:16:14.836 }' 00:16:14.836 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.836 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.095 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.095 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:15.095 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.095 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.095 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.095 12:08:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:15.095 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:15.095 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.095 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.095 [2024-11-19 12:08:18.468831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.355 "name": "Existed_Raid", 00:16:15.355 "uuid": "af11a487-5997-4043-8d4c-0880c22d9442", 00:16:15.355 "strip_size_kb": 64, 00:16:15.355 "state": "configuring", 00:16:15.355 "raid_level": "raid5f", 00:16:15.355 "superblock": true, 00:16:15.355 "num_base_bdevs": 4, 00:16:15.355 "num_base_bdevs_discovered": 3, 00:16:15.355 "num_base_bdevs_operational": 4, 00:16:15.355 "base_bdevs_list": [ 00:16:15.355 { 00:16:15.355 "name": null, 00:16:15.355 "uuid": "e3893d9a-ba7e-4f77-8d36-277068b0abb7", 00:16:15.355 "is_configured": false, 00:16:15.355 "data_offset": 0, 00:16:15.355 "data_size": 63488 00:16:15.355 }, 00:16:15.355 { 00:16:15.355 "name": "BaseBdev2", 00:16:15.355 "uuid": "3f626998-a0cd-43b4-97d6-8f13dd631f37", 00:16:15.355 "is_configured": true, 00:16:15.355 "data_offset": 2048, 00:16:15.355 "data_size": 63488 00:16:15.355 }, 00:16:15.355 { 00:16:15.355 "name": "BaseBdev3", 00:16:15.355 "uuid": "ed54937d-10a9-4490-b69e-208249599e76", 00:16:15.355 "is_configured": true, 00:16:15.355 "data_offset": 2048, 00:16:15.355 "data_size": 63488 00:16:15.355 }, 00:16:15.355 { 00:16:15.355 "name": "BaseBdev4", 00:16:15.355 "uuid": "e5854ac1-afe0-47c6-877d-5ca4c82b7351", 00:16:15.355 "is_configured": true, 00:16:15.355 "data_offset": 2048, 00:16:15.355 "data_size": 63488 00:16:15.355 } 00:16:15.355 ] 00:16:15.355 }' 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.355 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.615 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.615 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:15.615 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.615 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:15.615 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.615 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.615 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 12:08:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:15.875 12:08:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e3893d9a-ba7e-4f77-8d36-277068b0abb7 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.876 [2024-11-19 12:08:19.066900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:15.876 [2024-11-19 
12:08:19.067242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:15.876 [2024-11-19 12:08:19.067260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:15.876 [2024-11-19 12:08:19.067521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:15.876 NewBaseBdev 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.876 [2024-11-19 12:08:19.074801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:15.876 [2024-11-19 12:08:19.074865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:15.876 [2024-11-19 12:08:19.075138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.876 12:08:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.876 [ 00:16:15.876 { 00:16:15.876 "name": "NewBaseBdev", 00:16:15.876 "aliases": [ 00:16:15.876 "e3893d9a-ba7e-4f77-8d36-277068b0abb7" 00:16:15.876 ], 00:16:15.876 "product_name": "Malloc disk", 00:16:15.876 "block_size": 512, 00:16:15.876 "num_blocks": 65536, 00:16:15.876 "uuid": "e3893d9a-ba7e-4f77-8d36-277068b0abb7", 00:16:15.876 "assigned_rate_limits": { 00:16:15.876 "rw_ios_per_sec": 0, 00:16:15.876 "rw_mbytes_per_sec": 0, 00:16:15.876 "r_mbytes_per_sec": 0, 00:16:15.876 "w_mbytes_per_sec": 0 00:16:15.876 }, 00:16:15.876 "claimed": true, 00:16:15.876 "claim_type": "exclusive_write", 00:16:15.876 "zoned": false, 00:16:15.876 "supported_io_types": { 00:16:15.876 "read": true, 00:16:15.876 "write": true, 00:16:15.876 "unmap": true, 00:16:15.876 "flush": true, 00:16:15.876 "reset": true, 00:16:15.876 "nvme_admin": false, 00:16:15.876 "nvme_io": false, 00:16:15.876 "nvme_io_md": false, 00:16:15.876 "write_zeroes": true, 00:16:15.876 "zcopy": true, 00:16:15.876 "get_zone_info": false, 00:16:15.876 "zone_management": false, 00:16:15.876 "zone_append": false, 00:16:15.876 "compare": false, 00:16:15.876 "compare_and_write": false, 00:16:15.876 "abort": true, 00:16:15.876 "seek_hole": false, 00:16:15.876 "seek_data": false, 00:16:15.876 "copy": true, 00:16:15.876 "nvme_iov_md": false 00:16:15.876 }, 00:16:15.876 "memory_domains": [ 00:16:15.876 { 00:16:15.876 "dma_device_id": "system", 00:16:15.876 "dma_device_type": 1 00:16:15.876 }, 00:16:15.876 { 00:16:15.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:16:15.876 "dma_device_type": 2 00:16:15.876 } 00:16:15.876 ], 00:16:15.876 "driver_specific": {} 00:16:15.876 } 00:16:15.876 ] 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.876 "name": "Existed_Raid", 00:16:15.876 "uuid": "af11a487-5997-4043-8d4c-0880c22d9442", 00:16:15.876 "strip_size_kb": 64, 00:16:15.876 "state": "online", 00:16:15.876 "raid_level": "raid5f", 00:16:15.876 "superblock": true, 00:16:15.876 "num_base_bdevs": 4, 00:16:15.876 "num_base_bdevs_discovered": 4, 00:16:15.876 "num_base_bdevs_operational": 4, 00:16:15.876 "base_bdevs_list": [ 00:16:15.876 { 00:16:15.876 "name": "NewBaseBdev", 00:16:15.876 "uuid": "e3893d9a-ba7e-4f77-8d36-277068b0abb7", 00:16:15.876 "is_configured": true, 00:16:15.876 "data_offset": 2048, 00:16:15.876 "data_size": 63488 00:16:15.876 }, 00:16:15.876 { 00:16:15.876 "name": "BaseBdev2", 00:16:15.876 "uuid": "3f626998-a0cd-43b4-97d6-8f13dd631f37", 00:16:15.876 "is_configured": true, 00:16:15.876 "data_offset": 2048, 00:16:15.876 "data_size": 63488 00:16:15.876 }, 00:16:15.876 { 00:16:15.876 "name": "BaseBdev3", 00:16:15.876 "uuid": "ed54937d-10a9-4490-b69e-208249599e76", 00:16:15.876 "is_configured": true, 00:16:15.876 "data_offset": 2048, 00:16:15.876 "data_size": 63488 00:16:15.876 }, 00:16:15.876 { 00:16:15.876 "name": "BaseBdev4", 00:16:15.876 "uuid": "e5854ac1-afe0-47c6-877d-5ca4c82b7351", 00:16:15.876 "is_configured": true, 00:16:15.876 "data_offset": 2048, 00:16:15.876 "data_size": 63488 00:16:15.876 } 00:16:15.876 ] 00:16:15.876 }' 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.876 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:16.444 12:08:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.444 [2024-11-19 12:08:19.543067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:16.444 "name": "Existed_Raid", 00:16:16.444 "aliases": [ 00:16:16.444 "af11a487-5997-4043-8d4c-0880c22d9442" 00:16:16.444 ], 00:16:16.444 "product_name": "Raid Volume", 00:16:16.444 "block_size": 512, 00:16:16.444 "num_blocks": 190464, 00:16:16.444 "uuid": "af11a487-5997-4043-8d4c-0880c22d9442", 00:16:16.444 "assigned_rate_limits": { 00:16:16.444 "rw_ios_per_sec": 0, 00:16:16.444 "rw_mbytes_per_sec": 0, 00:16:16.444 "r_mbytes_per_sec": 0, 00:16:16.444 "w_mbytes_per_sec": 0 00:16:16.444 }, 00:16:16.444 "claimed": false, 00:16:16.444 "zoned": false, 00:16:16.444 "supported_io_types": { 00:16:16.444 "read": true, 00:16:16.444 
"write": true, 00:16:16.444 "unmap": false, 00:16:16.444 "flush": false, 00:16:16.444 "reset": true, 00:16:16.444 "nvme_admin": false, 00:16:16.444 "nvme_io": false, 00:16:16.444 "nvme_io_md": false, 00:16:16.444 "write_zeroes": true, 00:16:16.444 "zcopy": false, 00:16:16.444 "get_zone_info": false, 00:16:16.444 "zone_management": false, 00:16:16.444 "zone_append": false, 00:16:16.444 "compare": false, 00:16:16.444 "compare_and_write": false, 00:16:16.444 "abort": false, 00:16:16.444 "seek_hole": false, 00:16:16.444 "seek_data": false, 00:16:16.444 "copy": false, 00:16:16.444 "nvme_iov_md": false 00:16:16.444 }, 00:16:16.444 "driver_specific": { 00:16:16.444 "raid": { 00:16:16.444 "uuid": "af11a487-5997-4043-8d4c-0880c22d9442", 00:16:16.444 "strip_size_kb": 64, 00:16:16.444 "state": "online", 00:16:16.444 "raid_level": "raid5f", 00:16:16.444 "superblock": true, 00:16:16.444 "num_base_bdevs": 4, 00:16:16.444 "num_base_bdevs_discovered": 4, 00:16:16.444 "num_base_bdevs_operational": 4, 00:16:16.444 "base_bdevs_list": [ 00:16:16.444 { 00:16:16.444 "name": "NewBaseBdev", 00:16:16.444 "uuid": "e3893d9a-ba7e-4f77-8d36-277068b0abb7", 00:16:16.444 "is_configured": true, 00:16:16.444 "data_offset": 2048, 00:16:16.444 "data_size": 63488 00:16:16.444 }, 00:16:16.444 { 00:16:16.444 "name": "BaseBdev2", 00:16:16.444 "uuid": "3f626998-a0cd-43b4-97d6-8f13dd631f37", 00:16:16.444 "is_configured": true, 00:16:16.444 "data_offset": 2048, 00:16:16.444 "data_size": 63488 00:16:16.444 }, 00:16:16.444 { 00:16:16.444 "name": "BaseBdev3", 00:16:16.444 "uuid": "ed54937d-10a9-4490-b69e-208249599e76", 00:16:16.444 "is_configured": true, 00:16:16.444 "data_offset": 2048, 00:16:16.444 "data_size": 63488 00:16:16.444 }, 00:16:16.444 { 00:16:16.444 "name": "BaseBdev4", 00:16:16.444 "uuid": "e5854ac1-afe0-47c6-877d-5ca4c82b7351", 00:16:16.444 "is_configured": true, 00:16:16.444 "data_offset": 2048, 00:16:16.444 "data_size": 63488 00:16:16.444 } 00:16:16.444 ] 00:16:16.444 } 00:16:16.444 } 
00:16:16.444 }' 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:16.444 BaseBdev2 00:16:16.444 BaseBdev3 00:16:16.444 BaseBdev4' 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.444 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.704 [2024-11-19 12:08:19.846260] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:16.704 [2024-11-19 12:08:19.846330] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.704 [2024-11-19 12:08:19.846412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.704 [2024-11-19 12:08:19.846700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.704 [2024-11-19 12:08:19.846711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83362 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83362 ']' 00:16:16.704 12:08:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83362 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83362 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.704 killing process with pid 83362 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83362' 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83362 00:16:16.704 [2024-11-19 12:08:19.893396] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.704 12:08:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83362 00:16:16.964 [2024-11-19 12:08:20.265976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:18.376 12:08:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:18.376 00:16:18.376 real 0m11.057s 00:16:18.376 user 0m17.593s 00:16:18.376 sys 0m1.989s 00:16:18.376 12:08:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.376 12:08:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.376 ************************************ 00:16:18.376 END TEST raid5f_state_function_test_sb 00:16:18.376 ************************************ 00:16:18.376 12:08:21 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 4 00:16:18.376 12:08:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:18.376 12:08:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.376 12:08:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:18.376 ************************************ 00:16:18.376 START TEST raid5f_superblock_test 00:16:18.376 ************************************ 00:16:18.376 12:08:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:18.376 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:18.376 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:18.376 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:18.376 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:18.376 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:18.376 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:18.376 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:18.376 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84033 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84033 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84033 ']' 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.377 12:08:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.377 [2024-11-19 12:08:21.489587] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:16:18.377 [2024-11-19 12:08:21.489704] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84033 ] 00:16:18.377 [2024-11-19 12:08:21.646528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.636 [2024-11-19 12:08:21.760939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.636 [2024-11-19 12:08:21.949558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.636 [2024-11-19 12:08:21.949672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.205 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.206 malloc1 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.206 [2024-11-19 12:08:22.350158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:19.206 [2024-11-19 12:08:22.350277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.206 [2024-11-19 12:08:22.350325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:19.206 [2024-11-19 12:08:22.350383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.206 [2024-11-19 12:08:22.352393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.206 [2024-11-19 12:08:22.352472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:19.206 pt1 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.206 malloc2 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.206 [2024-11-19 12:08:22.408037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:19.206 [2024-11-19 12:08:22.408085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.206 [2024-11-19 12:08:22.408105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:19.206 [2024-11-19 12:08:22.408114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.206 [2024-11-19 12:08:22.410071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.206 [2024-11-19 12:08:22.410106] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:19.206 pt2 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.206 malloc3 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.206 [2024-11-19 12:08:22.475479] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:19.206 [2024-11-19 12:08:22.475570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.206 [2024-11-19 12:08:22.475610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:19.206 [2024-11-19 12:08:22.475639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.206 [2024-11-19 12:08:22.477684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.206 [2024-11-19 12:08:22.477754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:19.206 pt3 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.206 12:08:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.206 malloc4 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.206 [2024-11-19 12:08:22.529242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:19.206 [2024-11-19 12:08:22.529326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.206 [2024-11-19 12:08:22.529376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:19.206 [2024-11-19 12:08:22.529403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.206 [2024-11-19 12:08:22.531394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.206 [2024-11-19 12:08:22.531460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:19.206 pt4 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.206 [2024-11-19 12:08:22.541259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:19.206 [2024-11-19 12:08:22.543023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:19.206 [2024-11-19 12:08:22.543121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:19.206 [2024-11-19 12:08:22.543227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:19.206 [2024-11-19 12:08:22.543472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:19.206 [2024-11-19 12:08:22.543522] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:19.206 [2024-11-19 12:08:22.543780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:19.206 [2024-11-19 12:08:22.551374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:19.206 [2024-11-19 12:08:22.551429] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:19.206 [2024-11-19 12:08:22.551650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:19.206 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.207 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.207 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.207 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.207 
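The trace above exercises a fixed RPC sequence: create a 32 MiB malloc bdev with 512-byte blocks, wrap it in a passthru bdev with a fixed UUID (`pt1`..`pt4`), repeat for all four base devices, then assemble them into a `raid5f` volume with a 64 KiB strip and an on-disk superblock (`-s`). As a minimal sketch, the helper below rebuilds that command sequence (without executing it) so it can be inspected or piped to SPDK's `scripts/rpc.py`; the helper name is an illustration, not part of the test scripts:

```shell
# Emit (without executing) the RPC command lines the raid5f_superblock_test
# trace runs, in the same order as the log above. Each line could be fed to
# SPDK's scripts/rpc.py against a running target.
build_raid5f_cmds() {
    local i
    for i in 1 2 3 4; do
        # 32 MiB malloc bdev, 512-byte block size, as in the trace
        echo "bdev_malloc_create 32 512 -b malloc$i"
        # passthru wrapper with the fixed UUID scheme the trace uses
        echo "bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i"
    done
    # raid5f volume, 64 KiB strip, superblock enabled (-s)
    echo "bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s"
}
build_raid5f_cmds
```

The superblock flag matters for the later steps of the test: it is what leaves raid metadata on the base bdevs after `bdev_raid_delete`.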
12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.207 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.207 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.207 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.207 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.207 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.207 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.207 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.207 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.466 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.466 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.466 "name": "raid_bdev1", 00:16:19.466 "uuid": "5a7fcb9a-d9ba-4704-8d90-ea8366d06811", 00:16:19.466 "strip_size_kb": 64, 00:16:19.466 "state": "online", 00:16:19.466 "raid_level": "raid5f", 00:16:19.466 "superblock": true, 00:16:19.466 "num_base_bdevs": 4, 00:16:19.466 "num_base_bdevs_discovered": 4, 00:16:19.466 "num_base_bdevs_operational": 4, 00:16:19.466 "base_bdevs_list": [ 00:16:19.466 { 00:16:19.466 "name": "pt1", 00:16:19.466 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.466 "is_configured": true, 00:16:19.466 "data_offset": 2048, 00:16:19.466 "data_size": 63488 00:16:19.466 }, 00:16:19.466 { 00:16:19.466 "name": "pt2", 00:16:19.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.466 "is_configured": true, 00:16:19.466 "data_offset": 2048, 00:16:19.466 
"data_size": 63488 00:16:19.466 }, 00:16:19.466 { 00:16:19.466 "name": "pt3", 00:16:19.466 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.466 "is_configured": true, 00:16:19.466 "data_offset": 2048, 00:16:19.466 "data_size": 63488 00:16:19.466 }, 00:16:19.466 { 00:16:19.466 "name": "pt4", 00:16:19.466 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.466 "is_configured": true, 00:16:19.466 "data_offset": 2048, 00:16:19.466 "data_size": 63488 00:16:19.466 } 00:16:19.466 ] 00:16:19.466 }' 00:16:19.466 12:08:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.466 12:08:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.726 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:19.726 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:19.726 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:19.726 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:19.726 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:19.726 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:19.726 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:19.726 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:19.726 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.726 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.726 [2024-11-19 12:08:23.031213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.726 12:08:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.726 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:19.726 "name": "raid_bdev1", 00:16:19.726 "aliases": [ 00:16:19.726 "5a7fcb9a-d9ba-4704-8d90-ea8366d06811" 00:16:19.726 ], 00:16:19.726 "product_name": "Raid Volume", 00:16:19.726 "block_size": 512, 00:16:19.726 "num_blocks": 190464, 00:16:19.726 "uuid": "5a7fcb9a-d9ba-4704-8d90-ea8366d06811", 00:16:19.726 "assigned_rate_limits": { 00:16:19.726 "rw_ios_per_sec": 0, 00:16:19.726 "rw_mbytes_per_sec": 0, 00:16:19.726 "r_mbytes_per_sec": 0, 00:16:19.726 "w_mbytes_per_sec": 0 00:16:19.726 }, 00:16:19.726 "claimed": false, 00:16:19.726 "zoned": false, 00:16:19.726 "supported_io_types": { 00:16:19.726 "read": true, 00:16:19.726 "write": true, 00:16:19.726 "unmap": false, 00:16:19.726 "flush": false, 00:16:19.726 "reset": true, 00:16:19.726 "nvme_admin": false, 00:16:19.726 "nvme_io": false, 00:16:19.726 "nvme_io_md": false, 00:16:19.726 "write_zeroes": true, 00:16:19.726 "zcopy": false, 00:16:19.726 "get_zone_info": false, 00:16:19.726 "zone_management": false, 00:16:19.726 "zone_append": false, 00:16:19.726 "compare": false, 00:16:19.726 "compare_and_write": false, 00:16:19.726 "abort": false, 00:16:19.726 "seek_hole": false, 00:16:19.726 "seek_data": false, 00:16:19.726 "copy": false, 00:16:19.726 "nvme_iov_md": false 00:16:19.726 }, 00:16:19.726 "driver_specific": { 00:16:19.726 "raid": { 00:16:19.726 "uuid": "5a7fcb9a-d9ba-4704-8d90-ea8366d06811", 00:16:19.726 "strip_size_kb": 64, 00:16:19.726 "state": "online", 00:16:19.726 "raid_level": "raid5f", 00:16:19.726 "superblock": true, 00:16:19.726 "num_base_bdevs": 4, 00:16:19.726 "num_base_bdevs_discovered": 4, 00:16:19.726 "num_base_bdevs_operational": 4, 00:16:19.726 "base_bdevs_list": [ 00:16:19.726 { 00:16:19.726 "name": "pt1", 00:16:19.726 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.726 "is_configured": true, 00:16:19.726 "data_offset": 2048, 
00:16:19.726 "data_size": 63488 00:16:19.726 }, 00:16:19.726 { 00:16:19.726 "name": "pt2", 00:16:19.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.726 "is_configured": true, 00:16:19.726 "data_offset": 2048, 00:16:19.726 "data_size": 63488 00:16:19.726 }, 00:16:19.726 { 00:16:19.726 "name": "pt3", 00:16:19.726 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.726 "is_configured": true, 00:16:19.726 "data_offset": 2048, 00:16:19.726 "data_size": 63488 00:16:19.726 }, 00:16:19.726 { 00:16:19.726 "name": "pt4", 00:16:19.726 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.726 "is_configured": true, 00:16:19.726 "data_offset": 2048, 00:16:19.726 "data_size": 63488 00:16:19.726 } 00:16:19.726 ] 00:16:19.726 } 00:16:19.726 } 00:16:19.726 }' 00:16:19.726 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:19.986 pt2 00:16:19.986 pt3 00:16:19.986 pt4' 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.986 12:08:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:19.986 [2024-11-19 12:08:23.314627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5a7fcb9a-d9ba-4704-8d90-ea8366d06811 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
5a7fcb9a-d9ba-4704-8d90-ea8366d06811 ']' 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.986 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.246 [2024-11-19 12:08:23.362372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.246 [2024-11-19 12:08:23.362396] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.246 [2024-11-19 12:08:23.362467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.246 [2024-11-19 12:08:23.362547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.246 [2024-11-19 12:08:23.362561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:20.246 
12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.246 12:08:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.246 [2024-11-19 12:08:23.526139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:20.246 [2024-11-19 12:08:23.527895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:20.246 [2024-11-19 12:08:23.527944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:20.246 [2024-11-19 12:08:23.527975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:20.246 [2024-11-19 12:08:23.528039] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:20.246 [2024-11-19 12:08:23.528083] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:20.246 [2024-11-19 12:08:23.528102] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:20.246 [2024-11-19 12:08:23.528120] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:20.246 [2024-11-19 12:08:23.528132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.246 [2024-11-19 12:08:23.528143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:20.246 request: 00:16:20.246 { 00:16:20.246 "name": "raid_bdev1", 00:16:20.246 "raid_level": "raid5f", 00:16:20.246 "base_bdevs": [ 00:16:20.246 "malloc1", 00:16:20.246 "malloc2", 00:16:20.246 "malloc3", 00:16:20.246 "malloc4" 00:16:20.246 ], 00:16:20.246 "strip_size_kb": 64, 00:16:20.246 "superblock": false, 00:16:20.246 "method": "bdev_raid_create", 00:16:20.246 "req_id": 1 00:16:20.246 } 00:16:20.246 Got JSON-RPC error response 
00:16:20.246 response: 00:16:20.246 { 00:16:20.246 "code": -17, 00:16:20.246 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:20.246 } 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.246 [2024-11-19 12:08:23.586039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:20.246 [2024-11-19 12:08:23.586121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
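The negative test above hinges on a specific JSON-RPC failure: re-running `bdev_raid_create` over base bdevs whose superblocks already belong to another raid bdev returns error code -17 ("File exists"), which the `NOT` wrapper converts into a pass. A minimal sketch of detecting that error from the response JSON (the helper name is hypothetical, and the response string is taken from the trace above):

```shell
# Sketch: recognize the -17 "File exists" error the trace expects when
# raid_bdev1 is recreated over base bdevs that still carry another raid
# bdev's superblock. check_raid_create_conflict is a hypothetical helper;
# a real client would read this JSON from the failed rpc.py call.
check_raid_create_conflict() {
    local response='{"code": -17, "message": "Failed to create RAID bdev raid_bdev1: File exists"}'
    local code
    # Pull the numeric error code out of the JSON error object
    code=$(printf '%s' "$response" | sed -n 's/.*"code": \(-\{0,1\}[0-9]*\).*/\1/p')
    [ "$code" -eq -17 ] && echo "conflict detected (File exists)"
}
check_raid_create_conflict
```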
base bdev opened 00:16:20.246 [2024-11-19 12:08:23.586140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:20.246 [2024-11-19 12:08:23.586150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.246 [2024-11-19 12:08:23.588210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.246 [2024-11-19 12:08:23.588251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:20.246 [2024-11-19 12:08:23.588329] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:20.246 [2024-11-19 12:08:23.588385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:20.246 pt1 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.246 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.247 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.247 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.247 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.506 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.506 "name": "raid_bdev1", 00:16:20.506 "uuid": "5a7fcb9a-d9ba-4704-8d90-ea8366d06811", 00:16:20.506 "strip_size_kb": 64, 00:16:20.506 "state": "configuring", 00:16:20.506 "raid_level": "raid5f", 00:16:20.506 "superblock": true, 00:16:20.506 "num_base_bdevs": 4, 00:16:20.506 "num_base_bdevs_discovered": 1, 00:16:20.506 "num_base_bdevs_operational": 4, 00:16:20.506 "base_bdevs_list": [ 00:16:20.506 { 00:16:20.506 "name": "pt1", 00:16:20.506 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:20.506 "is_configured": true, 00:16:20.506 "data_offset": 2048, 00:16:20.506 "data_size": 63488 00:16:20.506 }, 00:16:20.506 { 00:16:20.506 "name": null, 00:16:20.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.506 "is_configured": false, 00:16:20.506 "data_offset": 2048, 00:16:20.506 "data_size": 63488 00:16:20.506 }, 00:16:20.506 { 00:16:20.506 "name": null, 00:16:20.506 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:20.506 "is_configured": false, 00:16:20.506 "data_offset": 2048, 00:16:20.506 "data_size": 63488 00:16:20.506 }, 00:16:20.506 { 00:16:20.506 "name": null, 00:16:20.506 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:20.506 "is_configured": false, 00:16:20.506 "data_offset": 2048, 00:16:20.506 "data_size": 63488 00:16:20.506 } 00:16:20.506 ] 00:16:20.506 }' 
00:16:20.506 12:08:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.506 12:08:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.765 [2024-11-19 12:08:24.017320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:20.765 [2024-11-19 12:08:24.017451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.765 [2024-11-19 12:08:24.017489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:20.765 [2024-11-19 12:08:24.017519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.765 [2024-11-19 12:08:24.017983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.765 [2024-11-19 12:08:24.018055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:20.765 [2024-11-19 12:08:24.018171] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:20.765 [2024-11-19 12:08:24.018228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:20.765 pt2 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.765 [2024-11-19 12:08:24.029297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:20.765 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.765 "name": "raid_bdev1", 00:16:20.766 "uuid": "5a7fcb9a-d9ba-4704-8d90-ea8366d06811", 00:16:20.766 "strip_size_kb": 64, 00:16:20.766 "state": "configuring", 00:16:20.766 "raid_level": "raid5f", 00:16:20.766 "superblock": true, 00:16:20.766 "num_base_bdevs": 4, 00:16:20.766 "num_base_bdevs_discovered": 1, 00:16:20.766 "num_base_bdevs_operational": 4, 00:16:20.766 "base_bdevs_list": [ 00:16:20.766 { 00:16:20.766 "name": "pt1", 00:16:20.766 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:20.766 "is_configured": true, 00:16:20.766 "data_offset": 2048, 00:16:20.766 "data_size": 63488 00:16:20.766 }, 00:16:20.766 { 00:16:20.766 "name": null, 00:16:20.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.766 "is_configured": false, 00:16:20.766 "data_offset": 0, 00:16:20.766 "data_size": 63488 00:16:20.766 }, 00:16:20.766 { 00:16:20.766 "name": null, 00:16:20.766 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:20.766 "is_configured": false, 00:16:20.766 "data_offset": 2048, 00:16:20.766 "data_size": 63488 00:16:20.766 }, 00:16:20.766 { 00:16:20.766 "name": null, 00:16:20.766 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:20.766 "is_configured": false, 00:16:20.766 "data_offset": 2048, 00:16:20.766 "data_size": 63488 00:16:20.766 } 00:16:20.766 ] 00:16:20.766 }' 00:16:20.766 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.766 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.335 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:21.335 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:21.335 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:21.335 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.335 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.335 [2024-11-19 12:08:24.452540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:21.335 [2024-11-19 12:08:24.452593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.335 [2024-11-19 12:08:24.452612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:21.335 [2024-11-19 12:08:24.452620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.335 [2024-11-19 12:08:24.453070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.335 [2024-11-19 12:08:24.453087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:21.335 [2024-11-19 12:08:24.453161] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:21.335 [2024-11-19 12:08:24.453182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:21.335 pt2 00:16:21.335 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.335 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.336 [2024-11-19 12:08:24.460519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:21.336 [2024-11-19 12:08:24.460618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.336 [2024-11-19 12:08:24.460638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:21.336 [2024-11-19 12:08:24.460646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.336 [2024-11-19 12:08:24.460983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.336 [2024-11-19 12:08:24.461021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:21.336 [2024-11-19 12:08:24.461080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:21.336 [2024-11-19 12:08:24.461097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:21.336 pt3 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.336 [2024-11-19 12:08:24.468482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:21.336 [2024-11-19 12:08:24.468526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.336 [2024-11-19 12:08:24.468544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:21.336 [2024-11-19 12:08:24.468551] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.336 [2024-11-19 12:08:24.468891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.336 [2024-11-19 12:08:24.468905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:21.336 [2024-11-19 12:08:24.468957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:21.336 [2024-11-19 12:08:24.468972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:21.336 [2024-11-19 12:08:24.469121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:21.336 [2024-11-19 12:08:24.469130] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:21.336 [2024-11-19 12:08:24.469351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:21.336 [2024-11-19 12:08:24.476043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:21.336 [2024-11-19 12:08:24.476064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:21.336 [2024-11-19 12:08:24.476228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.336 pt4 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.336 "name": "raid_bdev1", 00:16:21.336 "uuid": "5a7fcb9a-d9ba-4704-8d90-ea8366d06811", 00:16:21.336 "strip_size_kb": 64, 00:16:21.336 "state": "online", 00:16:21.336 "raid_level": "raid5f", 00:16:21.336 "superblock": true, 00:16:21.336 "num_base_bdevs": 4, 00:16:21.336 "num_base_bdevs_discovered": 4, 00:16:21.336 "num_base_bdevs_operational": 4, 00:16:21.336 "base_bdevs_list": [ 00:16:21.336 { 00:16:21.336 "name": "pt1", 00:16:21.336 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:21.336 "is_configured": true, 00:16:21.336 
"data_offset": 2048, 00:16:21.336 "data_size": 63488 00:16:21.336 }, 00:16:21.336 { 00:16:21.336 "name": "pt2", 00:16:21.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.336 "is_configured": true, 00:16:21.336 "data_offset": 2048, 00:16:21.336 "data_size": 63488 00:16:21.336 }, 00:16:21.336 { 00:16:21.336 "name": "pt3", 00:16:21.336 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.336 "is_configured": true, 00:16:21.336 "data_offset": 2048, 00:16:21.336 "data_size": 63488 00:16:21.336 }, 00:16:21.336 { 00:16:21.336 "name": "pt4", 00:16:21.336 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.336 "is_configured": true, 00:16:21.336 "data_offset": 2048, 00:16:21.336 "data_size": 63488 00:16:21.336 } 00:16:21.336 ] 00:16:21.336 }' 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.336 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.596 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:21.596 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:21.596 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:21.596 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:21.596 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:21.596 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:21.596 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:21.596 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:21.596 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.596 12:08:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.596 [2024-11-19 12:08:24.939845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.596 12:08:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.596 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:21.596 "name": "raid_bdev1", 00:16:21.596 "aliases": [ 00:16:21.596 "5a7fcb9a-d9ba-4704-8d90-ea8366d06811" 00:16:21.596 ], 00:16:21.596 "product_name": "Raid Volume", 00:16:21.596 "block_size": 512, 00:16:21.596 "num_blocks": 190464, 00:16:21.596 "uuid": "5a7fcb9a-d9ba-4704-8d90-ea8366d06811", 00:16:21.596 "assigned_rate_limits": { 00:16:21.596 "rw_ios_per_sec": 0, 00:16:21.596 "rw_mbytes_per_sec": 0, 00:16:21.596 "r_mbytes_per_sec": 0, 00:16:21.596 "w_mbytes_per_sec": 0 00:16:21.596 }, 00:16:21.596 "claimed": false, 00:16:21.596 "zoned": false, 00:16:21.596 "supported_io_types": { 00:16:21.596 "read": true, 00:16:21.596 "write": true, 00:16:21.596 "unmap": false, 00:16:21.596 "flush": false, 00:16:21.596 "reset": true, 00:16:21.596 "nvme_admin": false, 00:16:21.596 "nvme_io": false, 00:16:21.596 "nvme_io_md": false, 00:16:21.596 "write_zeroes": true, 00:16:21.596 "zcopy": false, 00:16:21.596 "get_zone_info": false, 00:16:21.596 "zone_management": false, 00:16:21.596 "zone_append": false, 00:16:21.596 "compare": false, 00:16:21.596 "compare_and_write": false, 00:16:21.596 "abort": false, 00:16:21.596 "seek_hole": false, 00:16:21.596 "seek_data": false, 00:16:21.596 "copy": false, 00:16:21.596 "nvme_iov_md": false 00:16:21.596 }, 00:16:21.596 "driver_specific": { 00:16:21.596 "raid": { 00:16:21.596 "uuid": "5a7fcb9a-d9ba-4704-8d90-ea8366d06811", 00:16:21.596 "strip_size_kb": 64, 00:16:21.596 "state": "online", 00:16:21.596 "raid_level": "raid5f", 00:16:21.596 "superblock": true, 00:16:21.596 "num_base_bdevs": 4, 00:16:21.596 "num_base_bdevs_discovered": 4, 
00:16:21.596 "num_base_bdevs_operational": 4, 00:16:21.596 "base_bdevs_list": [ 00:16:21.596 { 00:16:21.596 "name": "pt1", 00:16:21.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:21.596 "is_configured": true, 00:16:21.596 "data_offset": 2048, 00:16:21.596 "data_size": 63488 00:16:21.596 }, 00:16:21.596 { 00:16:21.596 "name": "pt2", 00:16:21.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.596 "is_configured": true, 00:16:21.596 "data_offset": 2048, 00:16:21.596 "data_size": 63488 00:16:21.596 }, 00:16:21.596 { 00:16:21.596 "name": "pt3", 00:16:21.596 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.596 "is_configured": true, 00:16:21.596 "data_offset": 2048, 00:16:21.596 "data_size": 63488 00:16:21.596 }, 00:16:21.596 { 00:16:21.596 "name": "pt4", 00:16:21.596 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.596 "is_configured": true, 00:16:21.596 "data_offset": 2048, 00:16:21.596 "data_size": 63488 00:16:21.596 } 00:16:21.596 ] 00:16:21.596 } 00:16:21.596 } 00:16:21.596 }' 00:16:21.596 12:08:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:21.856 pt2 00:16:21.856 pt3 00:16:21.856 pt4' 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.856 12:08:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.856 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.115 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:22.115 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:22.115 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:22.115 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:22.115 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.115 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.115 [2024-11-19 12:08:25.255239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.115 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.115 
12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5a7fcb9a-d9ba-4704-8d90-ea8366d06811 '!=' 5a7fcb9a-d9ba-4704-8d90-ea8366d06811 ']' 00:16:22.115 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:22.115 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:22.115 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:22.115 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:22.115 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.115 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.115 [2024-11-19 12:08:25.287063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.116 "name": "raid_bdev1", 00:16:22.116 "uuid": "5a7fcb9a-d9ba-4704-8d90-ea8366d06811", 00:16:22.116 "strip_size_kb": 64, 00:16:22.116 "state": "online", 00:16:22.116 "raid_level": "raid5f", 00:16:22.116 "superblock": true, 00:16:22.116 "num_base_bdevs": 4, 00:16:22.116 "num_base_bdevs_discovered": 3, 00:16:22.116 "num_base_bdevs_operational": 3, 00:16:22.116 "base_bdevs_list": [ 00:16:22.116 { 00:16:22.116 "name": null, 00:16:22.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.116 "is_configured": false, 00:16:22.116 "data_offset": 0, 00:16:22.116 "data_size": 63488 00:16:22.116 }, 00:16:22.116 { 00:16:22.116 "name": "pt2", 00:16:22.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.116 "is_configured": true, 00:16:22.116 "data_offset": 2048, 00:16:22.116 "data_size": 63488 00:16:22.116 }, 00:16:22.116 { 00:16:22.116 "name": "pt3", 00:16:22.116 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.116 "is_configured": true, 00:16:22.116 "data_offset": 2048, 00:16:22.116 "data_size": 63488 00:16:22.116 }, 00:16:22.116 { 00:16:22.116 "name": "pt4", 00:16:22.116 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.116 "is_configured": true, 00:16:22.116 
"data_offset": 2048, 00:16:22.116 "data_size": 63488 00:16:22.116 } 00:16:22.116 ] 00:16:22.116 }' 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.116 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.375 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:22.375 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.375 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.375 [2024-11-19 12:08:25.726255] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.375 [2024-11-19 12:08:25.726286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.375 [2024-11-19 12:08:25.726356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.375 [2024-11-19 12:08:25.726433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.375 [2024-11-19 12:08:25.726447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:22.375 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.375 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:22.375 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.375 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.375 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.375 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.635 [2024-11-19 12:08:25.802103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:22.635 [2024-11-19 12:08:25.802151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.635 [2024-11-19 12:08:25.802167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:22.635 [2024-11-19 12:08:25.802176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.635 [2024-11-19 12:08:25.804273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.635 [2024-11-19 12:08:25.804309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:22.635 [2024-11-19 12:08:25.804382] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:22.635 [2024-11-19 12:08:25.804439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.635 pt2 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.635 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.636 "name": "raid_bdev1", 00:16:22.636 "uuid": "5a7fcb9a-d9ba-4704-8d90-ea8366d06811", 00:16:22.636 "strip_size_kb": 64, 00:16:22.636 "state": "configuring", 00:16:22.636 "raid_level": "raid5f", 00:16:22.636 "superblock": true, 00:16:22.636 
"num_base_bdevs": 4, 00:16:22.636 "num_base_bdevs_discovered": 1, 00:16:22.636 "num_base_bdevs_operational": 3, 00:16:22.636 "base_bdevs_list": [ 00:16:22.636 { 00:16:22.636 "name": null, 00:16:22.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.636 "is_configured": false, 00:16:22.636 "data_offset": 2048, 00:16:22.636 "data_size": 63488 00:16:22.636 }, 00:16:22.636 { 00:16:22.636 "name": "pt2", 00:16:22.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.636 "is_configured": true, 00:16:22.636 "data_offset": 2048, 00:16:22.636 "data_size": 63488 00:16:22.636 }, 00:16:22.636 { 00:16:22.636 "name": null, 00:16:22.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.636 "is_configured": false, 00:16:22.636 "data_offset": 2048, 00:16:22.636 "data_size": 63488 00:16:22.636 }, 00:16:22.636 { 00:16:22.636 "name": null, 00:16:22.636 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.636 "is_configured": false, 00:16:22.636 "data_offset": 2048, 00:16:22.636 "data_size": 63488 00:16:22.636 } 00:16:22.636 ] 00:16:22.636 }' 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.636 12:08:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.895 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:22.895 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:22.895 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:22.895 12:08:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.895 12:08:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.895 [2024-11-19 12:08:26.233426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:22.895 [2024-11-19 
12:08:26.233492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.896 [2024-11-19 12:08:26.233515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:22.896 [2024-11-19 12:08:26.233527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.896 [2024-11-19 12:08:26.233953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.896 [2024-11-19 12:08:26.233979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:22.896 [2024-11-19 12:08:26.234078] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:22.896 [2024-11-19 12:08:26.234108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:22.896 pt3 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.896 12:08:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.155 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.155 "name": "raid_bdev1", 00:16:23.155 "uuid": "5a7fcb9a-d9ba-4704-8d90-ea8366d06811", 00:16:23.155 "strip_size_kb": 64, 00:16:23.155 "state": "configuring", 00:16:23.155 "raid_level": "raid5f", 00:16:23.155 "superblock": true, 00:16:23.155 "num_base_bdevs": 4, 00:16:23.155 "num_base_bdevs_discovered": 2, 00:16:23.155 "num_base_bdevs_operational": 3, 00:16:23.155 "base_bdevs_list": [ 00:16:23.155 { 00:16:23.155 "name": null, 00:16:23.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.155 "is_configured": false, 00:16:23.155 "data_offset": 2048, 00:16:23.155 "data_size": 63488 00:16:23.155 }, 00:16:23.155 { 00:16:23.155 "name": "pt2", 00:16:23.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.155 "is_configured": true, 00:16:23.155 "data_offset": 2048, 00:16:23.155 "data_size": 63488 00:16:23.155 }, 00:16:23.155 { 00:16:23.155 "name": "pt3", 00:16:23.155 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:23.155 "is_configured": true, 00:16:23.155 "data_offset": 2048, 00:16:23.155 "data_size": 63488 00:16:23.155 }, 00:16:23.155 { 00:16:23.155 "name": null, 00:16:23.155 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:23.155 "is_configured": false, 00:16:23.155 "data_offset": 2048, 
00:16:23.155 "data_size": 63488 00:16:23.155 } 00:16:23.155 ] 00:16:23.155 }' 00:16:23.155 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.155 12:08:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.414 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:23.414 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.415 [2024-11-19 12:08:26.720550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:23.415 [2024-11-19 12:08:26.720604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.415 [2024-11-19 12:08:26.720624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:23.415 [2024-11-19 12:08:26.720633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.415 [2024-11-19 12:08:26.721066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.415 [2024-11-19 12:08:26.721091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:23.415 [2024-11-19 12:08:26.721169] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:23.415 [2024-11-19 12:08:26.721190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:23.415 [2024-11-19 12:08:26.721324] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:23.415 [2024-11-19 12:08:26.721338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:23.415 [2024-11-19 12:08:26.721561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:23.415 [2024-11-19 12:08:26.728162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:23.415 [2024-11-19 12:08:26.728190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:23.415 [2024-11-19 12:08:26.728467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.415 pt4 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.415 
12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.415 "name": "raid_bdev1", 00:16:23.415 "uuid": "5a7fcb9a-d9ba-4704-8d90-ea8366d06811", 00:16:23.415 "strip_size_kb": 64, 00:16:23.415 "state": "online", 00:16:23.415 "raid_level": "raid5f", 00:16:23.415 "superblock": true, 00:16:23.415 "num_base_bdevs": 4, 00:16:23.415 "num_base_bdevs_discovered": 3, 00:16:23.415 "num_base_bdevs_operational": 3, 00:16:23.415 "base_bdevs_list": [ 00:16:23.415 { 00:16:23.415 "name": null, 00:16:23.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.415 "is_configured": false, 00:16:23.415 "data_offset": 2048, 00:16:23.415 "data_size": 63488 00:16:23.415 }, 00:16:23.415 { 00:16:23.415 "name": "pt2", 00:16:23.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.415 "is_configured": true, 00:16:23.415 "data_offset": 2048, 00:16:23.415 "data_size": 63488 00:16:23.415 }, 00:16:23.415 { 00:16:23.415 "name": "pt3", 00:16:23.415 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:23.415 "is_configured": true, 00:16:23.415 "data_offset": 2048, 00:16:23.415 "data_size": 63488 00:16:23.415 }, 00:16:23.415 { 00:16:23.415 "name": "pt4", 00:16:23.415 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:23.415 "is_configured": true, 00:16:23.415 "data_offset": 2048, 00:16:23.415 "data_size": 63488 00:16:23.415 } 00:16:23.415 ] 00:16:23.415 }' 00:16:23.415 12:08:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.415 12:08:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.984 [2024-11-19 12:08:27.160137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.984 [2024-11-19 12:08:27.160169] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.984 [2024-11-19 12:08:27.160262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.984 [2024-11-19 12:08:27.160354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.984 [2024-11-19 12:08:27.160376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.984 [2024-11-19 12:08:27.232009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:23.984 [2024-11-19 12:08:27.232076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.984 [2024-11-19 12:08:27.232101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:23.984 [2024-11-19 12:08:27.232113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.984 [2024-11-19 12:08:27.234484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.984 [2024-11-19 12:08:27.234525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:23.984 [2024-11-19 12:08:27.234604] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:23.984 [2024-11-19 12:08:27.234657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:23.984 
[2024-11-19 12:08:27.234795] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:23.984 [2024-11-19 12:08:27.234814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.984 [2024-11-19 12:08:27.234829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:23.984 [2024-11-19 12:08:27.234890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:23.984 [2024-11-19 12:08:27.235014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:23.984 pt1 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.984 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.984 "name": "raid_bdev1", 00:16:23.984 "uuid": "5a7fcb9a-d9ba-4704-8d90-ea8366d06811", 00:16:23.984 "strip_size_kb": 64, 00:16:23.984 "state": "configuring", 00:16:23.984 "raid_level": "raid5f", 00:16:23.984 "superblock": true, 00:16:23.984 "num_base_bdevs": 4, 00:16:23.984 "num_base_bdevs_discovered": 2, 00:16:23.984 "num_base_bdevs_operational": 3, 00:16:23.984 "base_bdevs_list": [ 00:16:23.984 { 00:16:23.984 "name": null, 00:16:23.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.984 "is_configured": false, 00:16:23.984 "data_offset": 2048, 00:16:23.984 "data_size": 63488 00:16:23.984 }, 00:16:23.984 { 00:16:23.984 "name": "pt2", 00:16:23.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.984 "is_configured": true, 00:16:23.984 "data_offset": 2048, 00:16:23.984 "data_size": 63488 00:16:23.984 }, 00:16:23.984 { 00:16:23.984 "name": "pt3", 00:16:23.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:23.984 "is_configured": true, 00:16:23.984 "data_offset": 2048, 00:16:23.984 "data_size": 63488 00:16:23.984 }, 00:16:23.984 { 00:16:23.984 "name": null, 00:16:23.984 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:23.984 "is_configured": false, 00:16:23.985 "data_offset": 2048, 00:16:23.985 "data_size": 63488 00:16:23.985 } 00:16:23.985 ] 
00:16:23.985 }' 00:16:23.985 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.985 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.554 [2024-11-19 12:08:27.735243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:24.554 [2024-11-19 12:08:27.735301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.554 [2024-11-19 12:08:27.735325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:24.554 [2024-11-19 12:08:27.735335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.554 [2024-11-19 12:08:27.735773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.554 [2024-11-19 12:08:27.735804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:24.554 [2024-11-19 12:08:27.735884] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:24.554 [2024-11-19 12:08:27.735918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:24.554 [2024-11-19 12:08:27.736078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:24.554 [2024-11-19 12:08:27.736094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:24.554 [2024-11-19 12:08:27.736345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:24.554 [2024-11-19 12:08:27.743415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:24.554 [2024-11-19 12:08:27.743443] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:24.554 [2024-11-19 12:08:27.743713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.554 pt4 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.554 12:08:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.554 "name": "raid_bdev1", 00:16:24.554 "uuid": "5a7fcb9a-d9ba-4704-8d90-ea8366d06811", 00:16:24.554 "strip_size_kb": 64, 00:16:24.554 "state": "online", 00:16:24.554 "raid_level": "raid5f", 00:16:24.554 "superblock": true, 00:16:24.554 "num_base_bdevs": 4, 00:16:24.554 "num_base_bdevs_discovered": 3, 00:16:24.554 "num_base_bdevs_operational": 3, 00:16:24.554 "base_bdevs_list": [ 00:16:24.554 { 00:16:24.554 "name": null, 00:16:24.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.554 "is_configured": false, 00:16:24.554 "data_offset": 2048, 00:16:24.554 "data_size": 63488 00:16:24.554 }, 00:16:24.554 { 00:16:24.554 "name": "pt2", 00:16:24.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.554 "is_configured": true, 00:16:24.554 "data_offset": 2048, 00:16:24.554 "data_size": 63488 00:16:24.554 }, 00:16:24.554 { 00:16:24.554 "name": "pt3", 00:16:24.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.554 "is_configured": true, 00:16:24.554 "data_offset": 2048, 00:16:24.554 "data_size": 63488 
00:16:24.554 }, 00:16:24.554 { 00:16:24.554 "name": "pt4", 00:16:24.554 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:24.554 "is_configured": true, 00:16:24.554 "data_offset": 2048, 00:16:24.554 "data_size": 63488 00:16:24.554 } 00:16:24.554 ] 00:16:24.554 }' 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.554 12:08:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.122 [2024-11-19 12:08:28.247762] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5a7fcb9a-d9ba-4704-8d90-ea8366d06811 '!=' 5a7fcb9a-d9ba-4704-8d90-ea8366d06811 ']' 00:16:25.122 12:08:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84033 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84033 ']' 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84033 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84033 00:16:25.122 killing process with pid 84033 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84033' 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84033 00:16:25.122 [2024-11-19 12:08:28.312004] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.122 [2024-11-19 12:08:28.312097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.122 [2024-11-19 12:08:28.312173] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.122 [2024-11-19 12:08:28.312186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:25.122 12:08:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84033 00:16:25.381 [2024-11-19 12:08:28.688706] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:26.765 12:08:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:26.765 
00:16:26.765 real 0m8.360s 00:16:26.765 user 0m13.208s 00:16:26.765 sys 0m1.484s 00:16:26.765 12:08:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.765 12:08:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.765 ************************************ 00:16:26.765 END TEST raid5f_superblock_test 00:16:26.765 ************************************ 00:16:26.765 12:08:29 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:26.765 12:08:29 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:26.765 12:08:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:26.765 12:08:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.765 12:08:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:26.765 ************************************ 00:16:26.765 START TEST raid5f_rebuild_test 00:16:26.765 ************************************ 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:26.765 12:08:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84522 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84522 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84522 ']' 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.765 12:08:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.765 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:26.765 Zero copy mechanism will not be used. 00:16:26.765 [2024-11-19 12:08:29.948523] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:16:26.765 [2024-11-19 12:08:29.948638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84522 ] 00:16:26.765 [2024-11-19 12:08:30.119472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.025 [2024-11-19 12:08:30.234375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.285 [2024-11-19 12:08:30.434336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.286 [2024-11-19 12:08:30.434398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.546 BaseBdev1_malloc 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.546 [2024-11-19 12:08:30.823121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:27.546 [2024-11-19 12:08:30.823197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.546 [2024-11-19 12:08:30.823237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:27.546 [2024-11-19 12:08:30.823248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.546 [2024-11-19 12:08:30.825265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.546 [2024-11-19 12:08:30.825321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:27.546 BaseBdev1 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.546 BaseBdev2_malloc 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.546 [2024-11-19 12:08:30.877574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:27.546 [2024-11-19 12:08:30.877631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.546 [2024-11-19 12:08:30.877651] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:27.546 [2024-11-19 12:08:30.877662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.546 [2024-11-19 12:08:30.879622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.546 [2024-11-19 12:08:30.879661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:27.546 BaseBdev2 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.546 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.807 BaseBdev3_malloc 00:16:27.807 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.807 12:08:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:27.807 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.807 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.807 [2024-11-19 12:08:30.966801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:27.807 [2024-11-19 12:08:30.966858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.807 [2024-11-19 12:08:30.966880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:27.807 [2024-11-19 12:08:30.966891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.807 
[2024-11-19 12:08:30.968934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.807 [2024-11-19 12:08:30.968979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:27.807 BaseBdev3 00:16:27.807 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.807 12:08:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:27.807 12:08:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:27.807 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.807 12:08:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.807 BaseBdev4_malloc 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.807 [2024-11-19 12:08:31.020573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:27.807 [2024-11-19 12:08:31.020628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.807 [2024-11-19 12:08:31.020646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:27.807 [2024-11-19 12:08:31.020656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.807 [2024-11-19 12:08:31.022613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.807 [2024-11-19 12:08:31.022652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:27.807 BaseBdev4 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.807 spare_malloc 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.807 spare_delay 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.807 [2024-11-19 12:08:31.086456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:27.807 [2024-11-19 12:08:31.086528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.807 [2024-11-19 12:08:31.086547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:27.807 [2024-11-19 12:08:31.086557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.807 [2024-11-19 12:08:31.088539] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.807 [2024-11-19 12:08:31.088578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:27.807 spare 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.807 [2024-11-19 12:08:31.098486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.807 [2024-11-19 12:08:31.100232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.807 [2024-11-19 12:08:31.100299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:27.807 [2024-11-19 12:08:31.100350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:27.807 [2024-11-19 12:08:31.100435] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:27.807 [2024-11-19 12:08:31.100455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:27.807 [2024-11-19 12:08:31.100683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:27.807 [2024-11-19 12:08:31.107806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:27.807 [2024-11-19 12:08:31.107828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:27.807 [2024-11-19 12:08:31.108026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.807 12:08:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.807 "name": "raid_bdev1", 00:16:27.807 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:27.807 "strip_size_kb": 64, 00:16:27.807 "state": "online", 00:16:27.807 
"raid_level": "raid5f", 00:16:27.807 "superblock": false, 00:16:27.807 "num_base_bdevs": 4, 00:16:27.807 "num_base_bdevs_discovered": 4, 00:16:27.807 "num_base_bdevs_operational": 4, 00:16:27.807 "base_bdevs_list": [ 00:16:27.807 { 00:16:27.807 "name": "BaseBdev1", 00:16:27.807 "uuid": "8ea75684-e7d9-549f-8f64-c12d2f099cc2", 00:16:27.807 "is_configured": true, 00:16:27.807 "data_offset": 0, 00:16:27.807 "data_size": 65536 00:16:27.807 }, 00:16:27.807 { 00:16:27.807 "name": "BaseBdev2", 00:16:27.807 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:27.807 "is_configured": true, 00:16:27.807 "data_offset": 0, 00:16:27.807 "data_size": 65536 00:16:27.807 }, 00:16:27.807 { 00:16:27.807 "name": "BaseBdev3", 00:16:27.807 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:27.807 "is_configured": true, 00:16:27.807 "data_offset": 0, 00:16:27.807 "data_size": 65536 00:16:27.807 }, 00:16:27.807 { 00:16:27.807 "name": "BaseBdev4", 00:16:27.807 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:27.807 "is_configured": true, 00:16:27.807 "data_offset": 0, 00:16:27.807 "data_size": 65536 00:16:27.807 } 00:16:27.807 ] 00:16:27.807 }' 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.807 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.377 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:28.377 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:28.377 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.377 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.378 [2024-11-19 12:08:31.567851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:28.378 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:28.638 [2024-11-19 12:08:31.795379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:28.638 /dev/nbd0 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:28.638 1+0 records in 00:16:28.638 1+0 records out 00:16:28.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378772 s, 10.8 MB/s 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:28.638 12:08:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:29.206 512+0 records in 00:16:29.206 512+0 records out 00:16:29.206 100663296 bytes (101 MB, 96 MiB) copied, 0.539031 s, 187 MB/s 00:16:29.206 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:29.206 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:29.206 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:29.206 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:29.206 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:29.206 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:29.206 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:29.465 
12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:29.465 [2024-11-19 12:08:32.608283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.465 [2024-11-19 12:08:32.618440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.465 12:08:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.466 12:08:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.466 12:08:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.466 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.466 "name": "raid_bdev1", 00:16:29.466 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:29.466 "strip_size_kb": 64, 00:16:29.466 "state": "online", 00:16:29.466 "raid_level": "raid5f", 00:16:29.466 "superblock": false, 00:16:29.466 "num_base_bdevs": 4, 00:16:29.466 "num_base_bdevs_discovered": 3, 00:16:29.466 "num_base_bdevs_operational": 3, 00:16:29.466 "base_bdevs_list": [ 00:16:29.466 { 00:16:29.466 "name": null, 00:16:29.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.466 "is_configured": false, 00:16:29.466 "data_offset": 0, 00:16:29.466 "data_size": 65536 00:16:29.466 }, 00:16:29.466 { 00:16:29.466 "name": "BaseBdev2", 00:16:29.466 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:29.466 "is_configured": true, 00:16:29.466 "data_offset": 0, 00:16:29.466 "data_size": 65536 00:16:29.466 }, 00:16:29.466 { 00:16:29.466 "name": "BaseBdev3", 00:16:29.466 "uuid": 
"edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:29.466 "is_configured": true, 00:16:29.466 "data_offset": 0, 00:16:29.466 "data_size": 65536 00:16:29.466 }, 00:16:29.466 { 00:16:29.466 "name": "BaseBdev4", 00:16:29.466 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:29.466 "is_configured": true, 00:16:29.466 "data_offset": 0, 00:16:29.466 "data_size": 65536 00:16:29.466 } 00:16:29.466 ] 00:16:29.466 }' 00:16:29.466 12:08:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.466 12:08:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.044 12:08:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:30.044 12:08:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.044 12:08:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.044 [2024-11-19 12:08:33.113625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:30.044 [2024-11-19 12:08:33.128980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:30.044 12:08:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.044 12:08:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:30.044 [2024-11-19 12:08:33.138160] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.044 12:08:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.044 "name": "raid_bdev1", 00:16:31.044 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:31.044 "strip_size_kb": 64, 00:16:31.044 "state": "online", 00:16:31.044 "raid_level": "raid5f", 00:16:31.044 "superblock": false, 00:16:31.044 "num_base_bdevs": 4, 00:16:31.044 "num_base_bdevs_discovered": 4, 00:16:31.044 "num_base_bdevs_operational": 4, 00:16:31.044 "process": { 00:16:31.044 "type": "rebuild", 00:16:31.044 "target": "spare", 00:16:31.044 "progress": { 00:16:31.044 "blocks": 19200, 00:16:31.044 "percent": 9 00:16:31.044 } 00:16:31.044 }, 00:16:31.044 "base_bdevs_list": [ 00:16:31.044 { 00:16:31.044 "name": "spare", 00:16:31.044 "uuid": "49c84adf-a1a2-521c-9224-255598ea9f6f", 00:16:31.044 "is_configured": true, 00:16:31.044 "data_offset": 0, 00:16:31.044 "data_size": 65536 00:16:31.044 }, 00:16:31.044 { 00:16:31.044 "name": "BaseBdev2", 00:16:31.044 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:31.044 "is_configured": true, 00:16:31.044 "data_offset": 0, 00:16:31.044 "data_size": 65536 00:16:31.044 }, 00:16:31.044 { 00:16:31.044 "name": "BaseBdev3", 00:16:31.044 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:31.044 "is_configured": true, 00:16:31.044 "data_offset": 0, 00:16:31.044 "data_size": 65536 00:16:31.044 }, 
00:16:31.044 { 00:16:31.044 "name": "BaseBdev4", 00:16:31.044 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:31.044 "is_configured": true, 00:16:31.044 "data_offset": 0, 00:16:31.044 "data_size": 65536 00:16:31.044 } 00:16:31.044 ] 00:16:31.044 }' 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.044 [2024-11-19 12:08:34.285281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.044 [2024-11-19 12:08:34.344478] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:31.044 [2024-11-19 12:08:34.344567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.044 [2024-11-19 12:08:34.344583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.044 [2024-11-19 12:08:34.344593] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.044 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.303 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.303 "name": "raid_bdev1", 00:16:31.303 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:31.303 "strip_size_kb": 64, 00:16:31.303 "state": "online", 00:16:31.303 "raid_level": "raid5f", 00:16:31.303 "superblock": false, 00:16:31.304 "num_base_bdevs": 4, 00:16:31.304 "num_base_bdevs_discovered": 3, 00:16:31.304 "num_base_bdevs_operational": 3, 00:16:31.304 "base_bdevs_list": [ 00:16:31.304 { 00:16:31.304 "name": null, 00:16:31.304 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:31.304 "is_configured": false, 00:16:31.304 "data_offset": 0, 00:16:31.304 "data_size": 65536 00:16:31.304 }, 00:16:31.304 { 00:16:31.304 "name": "BaseBdev2", 00:16:31.304 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:31.304 "is_configured": true, 00:16:31.304 "data_offset": 0, 00:16:31.304 "data_size": 65536 00:16:31.304 }, 00:16:31.304 { 00:16:31.304 "name": "BaseBdev3", 00:16:31.304 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:31.304 "is_configured": true, 00:16:31.304 "data_offset": 0, 00:16:31.304 "data_size": 65536 00:16:31.304 }, 00:16:31.304 { 00:16:31.304 "name": "BaseBdev4", 00:16:31.304 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:31.304 "is_configured": true, 00:16:31.304 "data_offset": 0, 00:16:31.304 "data_size": 65536 00:16:31.304 } 00:16:31.304 ] 00:16:31.304 }' 00:16:31.304 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.304 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.563 12:08:34 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.563 "name": "raid_bdev1", 00:16:31.563 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:31.563 "strip_size_kb": 64, 00:16:31.563 "state": "online", 00:16:31.563 "raid_level": "raid5f", 00:16:31.563 "superblock": false, 00:16:31.563 "num_base_bdevs": 4, 00:16:31.563 "num_base_bdevs_discovered": 3, 00:16:31.563 "num_base_bdevs_operational": 3, 00:16:31.563 "base_bdevs_list": [ 00:16:31.563 { 00:16:31.563 "name": null, 00:16:31.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.563 "is_configured": false, 00:16:31.563 "data_offset": 0, 00:16:31.563 "data_size": 65536 00:16:31.563 }, 00:16:31.563 { 00:16:31.563 "name": "BaseBdev2", 00:16:31.563 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:31.563 "is_configured": true, 00:16:31.563 "data_offset": 0, 00:16:31.563 "data_size": 65536 00:16:31.563 }, 00:16:31.563 { 00:16:31.563 "name": "BaseBdev3", 00:16:31.563 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:31.563 "is_configured": true, 00:16:31.563 "data_offset": 0, 00:16:31.563 "data_size": 65536 00:16:31.563 }, 00:16:31.563 { 00:16:31.563 "name": "BaseBdev4", 00:16:31.563 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:31.563 "is_configured": true, 00:16:31.563 "data_offset": 0, 00:16:31.563 "data_size": 65536 00:16:31.563 } 00:16:31.563 ] 00:16:31.563 }' 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.563 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.563 [2024-11-19 12:08:34.925438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:31.823 [2024-11-19 12:08:34.940328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:31.823 12:08:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.823 12:08:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:31.823 [2024-11-19 12:08:34.949509] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:32.764 12:08:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.764 12:08:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.764 12:08:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.764 12:08:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.764 12:08:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.764 12:08:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.764 12:08:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.764 12:08:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.764 12:08:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.764 12:08:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.764 12:08:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.764 "name": "raid_bdev1", 00:16:32.764 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:32.764 "strip_size_kb": 64, 00:16:32.764 "state": "online", 00:16:32.764 "raid_level": "raid5f", 00:16:32.764 "superblock": false, 00:16:32.764 "num_base_bdevs": 4, 00:16:32.764 "num_base_bdevs_discovered": 4, 00:16:32.764 "num_base_bdevs_operational": 4, 00:16:32.764 "process": { 00:16:32.764 "type": "rebuild", 00:16:32.764 "target": "spare", 00:16:32.764 "progress": { 00:16:32.764 "blocks": 19200, 00:16:32.764 "percent": 9 00:16:32.764 } 00:16:32.764 }, 00:16:32.764 "base_bdevs_list": [ 00:16:32.764 { 00:16:32.764 "name": "spare", 00:16:32.764 "uuid": "49c84adf-a1a2-521c-9224-255598ea9f6f", 00:16:32.764 "is_configured": true, 00:16:32.764 "data_offset": 0, 00:16:32.764 "data_size": 65536 00:16:32.764 }, 00:16:32.764 { 00:16:32.764 "name": "BaseBdev2", 00:16:32.764 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:32.764 "is_configured": true, 00:16:32.764 "data_offset": 0, 00:16:32.764 "data_size": 65536 00:16:32.764 }, 00:16:32.764 { 00:16:32.764 "name": "BaseBdev3", 00:16:32.764 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:32.764 "is_configured": true, 00:16:32.764 "data_offset": 0, 00:16:32.764 "data_size": 65536 00:16:32.764 }, 00:16:32.764 { 00:16:32.764 "name": "BaseBdev4", 00:16:32.764 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:32.764 "is_configured": true, 00:16:32.764 "data_offset": 0, 00:16:32.764 "data_size": 65536 00:16:32.764 } 00:16:32.764 ] 00:16:32.764 }' 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=609 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.764 12:08:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.024 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.024 "name": "raid_bdev1", 00:16:33.024 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 
00:16:33.024 "strip_size_kb": 64, 00:16:33.024 "state": "online", 00:16:33.024 "raid_level": "raid5f", 00:16:33.024 "superblock": false, 00:16:33.024 "num_base_bdevs": 4, 00:16:33.024 "num_base_bdevs_discovered": 4, 00:16:33.024 "num_base_bdevs_operational": 4, 00:16:33.024 "process": { 00:16:33.024 "type": "rebuild", 00:16:33.024 "target": "spare", 00:16:33.024 "progress": { 00:16:33.024 "blocks": 21120, 00:16:33.024 "percent": 10 00:16:33.024 } 00:16:33.024 }, 00:16:33.024 "base_bdevs_list": [ 00:16:33.024 { 00:16:33.024 "name": "spare", 00:16:33.024 "uuid": "49c84adf-a1a2-521c-9224-255598ea9f6f", 00:16:33.024 "is_configured": true, 00:16:33.024 "data_offset": 0, 00:16:33.024 "data_size": 65536 00:16:33.024 }, 00:16:33.024 { 00:16:33.024 "name": "BaseBdev2", 00:16:33.024 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:33.024 "is_configured": true, 00:16:33.024 "data_offset": 0, 00:16:33.024 "data_size": 65536 00:16:33.024 }, 00:16:33.024 { 00:16:33.024 "name": "BaseBdev3", 00:16:33.024 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:33.024 "is_configured": true, 00:16:33.024 "data_offset": 0, 00:16:33.024 "data_size": 65536 00:16:33.024 }, 00:16:33.024 { 00:16:33.024 "name": "BaseBdev4", 00:16:33.024 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:33.024 "is_configured": true, 00:16:33.024 "data_offset": 0, 00:16:33.024 "data_size": 65536 00:16:33.024 } 00:16:33.024 ] 00:16:33.024 }' 00:16:33.024 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.024 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.024 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.024 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.024 12:08:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.962 12:08:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.962 12:08:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.962 12:08:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.962 12:08:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.962 12:08:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.962 12:08:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.962 12:08:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.962 12:08:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.962 12:08:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.962 12:08:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.962 12:08:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.962 12:08:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.962 "name": "raid_bdev1", 00:16:33.962 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:33.962 "strip_size_kb": 64, 00:16:33.962 "state": "online", 00:16:33.962 "raid_level": "raid5f", 00:16:33.962 "superblock": false, 00:16:33.962 "num_base_bdevs": 4, 00:16:33.962 "num_base_bdevs_discovered": 4, 00:16:33.962 "num_base_bdevs_operational": 4, 00:16:33.962 "process": { 00:16:33.962 "type": "rebuild", 00:16:33.962 "target": "spare", 00:16:33.962 "progress": { 00:16:33.962 "blocks": 44160, 00:16:33.962 "percent": 22 00:16:33.962 } 00:16:33.962 }, 00:16:33.962 "base_bdevs_list": [ 00:16:33.962 { 00:16:33.962 "name": "spare", 00:16:33.962 "uuid": "49c84adf-a1a2-521c-9224-255598ea9f6f", 
00:16:33.962 "is_configured": true, 00:16:33.962 "data_offset": 0, 00:16:33.962 "data_size": 65536 00:16:33.962 }, 00:16:33.962 { 00:16:33.962 "name": "BaseBdev2", 00:16:33.962 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:33.962 "is_configured": true, 00:16:33.962 "data_offset": 0, 00:16:33.962 "data_size": 65536 00:16:33.962 }, 00:16:33.962 { 00:16:33.962 "name": "BaseBdev3", 00:16:33.962 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:33.962 "is_configured": true, 00:16:33.962 "data_offset": 0, 00:16:33.962 "data_size": 65536 00:16:33.962 }, 00:16:33.962 { 00:16:33.962 "name": "BaseBdev4", 00:16:33.962 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:33.962 "is_configured": true, 00:16:33.962 "data_offset": 0, 00:16:33.962 "data_size": 65536 00:16:33.962 } 00:16:33.962 ] 00:16:33.962 }' 00:16:33.962 12:08:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.222 12:08:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.222 12:08:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.222 12:08:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.222 12:08:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.160 "name": "raid_bdev1", 00:16:35.160 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:35.160 "strip_size_kb": 64, 00:16:35.160 "state": "online", 00:16:35.160 "raid_level": "raid5f", 00:16:35.160 "superblock": false, 00:16:35.160 "num_base_bdevs": 4, 00:16:35.160 "num_base_bdevs_discovered": 4, 00:16:35.160 "num_base_bdevs_operational": 4, 00:16:35.160 "process": { 00:16:35.160 "type": "rebuild", 00:16:35.160 "target": "spare", 00:16:35.160 "progress": { 00:16:35.160 "blocks": 65280, 00:16:35.160 "percent": 33 00:16:35.160 } 00:16:35.160 }, 00:16:35.160 "base_bdevs_list": [ 00:16:35.160 { 00:16:35.160 "name": "spare", 00:16:35.160 "uuid": "49c84adf-a1a2-521c-9224-255598ea9f6f", 00:16:35.160 "is_configured": true, 00:16:35.160 "data_offset": 0, 00:16:35.160 "data_size": 65536 00:16:35.160 }, 00:16:35.160 { 00:16:35.160 "name": "BaseBdev2", 00:16:35.160 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:35.160 "is_configured": true, 00:16:35.160 "data_offset": 0, 00:16:35.160 "data_size": 65536 00:16:35.160 }, 00:16:35.160 { 00:16:35.160 "name": "BaseBdev3", 00:16:35.160 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:35.160 "is_configured": true, 00:16:35.160 "data_offset": 0, 00:16:35.160 "data_size": 65536 00:16:35.160 }, 00:16:35.160 { 00:16:35.160 "name": 
"BaseBdev4", 00:16:35.160 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:35.160 "is_configured": true, 00:16:35.160 "data_offset": 0, 00:16:35.160 "data_size": 65536 00:16:35.160 } 00:16:35.160 ] 00:16:35.160 }' 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.160 12:08:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.420 12:08:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.420 12:08:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.365 12:08:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.365 "name": "raid_bdev1", 00:16:36.365 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:36.365 "strip_size_kb": 64, 00:16:36.365 "state": "online", 00:16:36.365 "raid_level": "raid5f", 00:16:36.365 "superblock": false, 00:16:36.365 "num_base_bdevs": 4, 00:16:36.365 "num_base_bdevs_discovered": 4, 00:16:36.365 "num_base_bdevs_operational": 4, 00:16:36.365 "process": { 00:16:36.365 "type": "rebuild", 00:16:36.365 "target": "spare", 00:16:36.365 "progress": { 00:16:36.365 "blocks": 88320, 00:16:36.365 "percent": 44 00:16:36.365 } 00:16:36.365 }, 00:16:36.365 "base_bdevs_list": [ 00:16:36.365 { 00:16:36.365 "name": "spare", 00:16:36.365 "uuid": "49c84adf-a1a2-521c-9224-255598ea9f6f", 00:16:36.365 "is_configured": true, 00:16:36.365 "data_offset": 0, 00:16:36.365 "data_size": 65536 00:16:36.365 }, 00:16:36.365 { 00:16:36.365 "name": "BaseBdev2", 00:16:36.365 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:36.365 "is_configured": true, 00:16:36.365 "data_offset": 0, 00:16:36.365 "data_size": 65536 00:16:36.365 }, 00:16:36.365 { 00:16:36.365 "name": "BaseBdev3", 00:16:36.365 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:36.365 "is_configured": true, 00:16:36.365 "data_offset": 0, 00:16:36.365 "data_size": 65536 00:16:36.365 }, 00:16:36.365 { 00:16:36.365 "name": "BaseBdev4", 00:16:36.365 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:36.365 "is_configured": true, 00:16:36.365 "data_offset": 0, 00:16:36.365 "data_size": 65536 00:16:36.365 } 00:16:36.365 ] 00:16:36.365 }' 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.365 12:08:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.744 "name": "raid_bdev1", 00:16:37.744 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:37.744 "strip_size_kb": 64, 00:16:37.744 "state": "online", 00:16:37.744 "raid_level": "raid5f", 00:16:37.744 "superblock": false, 00:16:37.744 "num_base_bdevs": 4, 00:16:37.744 "num_base_bdevs_discovered": 4, 00:16:37.744 "num_base_bdevs_operational": 4, 00:16:37.744 "process": { 00:16:37.744 "type": "rebuild", 00:16:37.744 "target": "spare", 00:16:37.744 "progress": { 00:16:37.744 "blocks": 109440, 00:16:37.744 "percent": 55 00:16:37.744 } 
00:16:37.744 }, 00:16:37.744 "base_bdevs_list": [ 00:16:37.744 { 00:16:37.744 "name": "spare", 00:16:37.744 "uuid": "49c84adf-a1a2-521c-9224-255598ea9f6f", 00:16:37.744 "is_configured": true, 00:16:37.744 "data_offset": 0, 00:16:37.744 "data_size": 65536 00:16:37.744 }, 00:16:37.744 { 00:16:37.744 "name": "BaseBdev2", 00:16:37.744 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:37.744 "is_configured": true, 00:16:37.744 "data_offset": 0, 00:16:37.744 "data_size": 65536 00:16:37.744 }, 00:16:37.744 { 00:16:37.744 "name": "BaseBdev3", 00:16:37.744 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:37.744 "is_configured": true, 00:16:37.744 "data_offset": 0, 00:16:37.744 "data_size": 65536 00:16:37.744 }, 00:16:37.744 { 00:16:37.744 "name": "BaseBdev4", 00:16:37.744 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:37.744 "is_configured": true, 00:16:37.744 "data_offset": 0, 00:16:37.744 "data_size": 65536 00:16:37.744 } 00:16:37.744 ] 00:16:37.744 }' 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.744 12:08:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.683 12:08:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.683 12:08:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.683 12:08:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.683 12:08:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.683 
12:08:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.683 12:08:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.683 12:08:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.683 12:08:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.683 12:08:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.683 12:08:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.683 12:08:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.683 12:08:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.683 "name": "raid_bdev1", 00:16:38.683 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:38.683 "strip_size_kb": 64, 00:16:38.683 "state": "online", 00:16:38.683 "raid_level": "raid5f", 00:16:38.683 "superblock": false, 00:16:38.683 "num_base_bdevs": 4, 00:16:38.683 "num_base_bdevs_discovered": 4, 00:16:38.683 "num_base_bdevs_operational": 4, 00:16:38.683 "process": { 00:16:38.683 "type": "rebuild", 00:16:38.684 "target": "spare", 00:16:38.684 "progress": { 00:16:38.684 "blocks": 130560, 00:16:38.684 "percent": 66 00:16:38.684 } 00:16:38.684 }, 00:16:38.684 "base_bdevs_list": [ 00:16:38.684 { 00:16:38.684 "name": "spare", 00:16:38.684 "uuid": "49c84adf-a1a2-521c-9224-255598ea9f6f", 00:16:38.684 "is_configured": true, 00:16:38.684 "data_offset": 0, 00:16:38.684 "data_size": 65536 00:16:38.684 }, 00:16:38.684 { 00:16:38.684 "name": "BaseBdev2", 00:16:38.684 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:38.684 "is_configured": true, 00:16:38.684 "data_offset": 0, 00:16:38.684 "data_size": 65536 00:16:38.684 }, 00:16:38.684 { 00:16:38.684 "name": "BaseBdev3", 00:16:38.684 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 
00:16:38.684 "is_configured": true, 00:16:38.684 "data_offset": 0, 00:16:38.684 "data_size": 65536 00:16:38.684 }, 00:16:38.684 { 00:16:38.684 "name": "BaseBdev4", 00:16:38.684 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:38.684 "is_configured": true, 00:16:38.684 "data_offset": 0, 00:16:38.684 "data_size": 65536 00:16:38.684 } 00:16:38.684 ] 00:16:38.684 }' 00:16:38.684 12:08:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.684 12:08:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.684 12:08:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.684 12:08:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.684 12:08:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.625 12:08:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.625 12:08:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.625 12:08:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.625 12:08:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.625 12:08:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.625 12:08:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.625 12:08:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.625 12:08:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.625 12:08:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.625 12:08:42 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.884 12:08:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.884 12:08:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.884 "name": "raid_bdev1", 00:16:39.884 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:39.884 "strip_size_kb": 64, 00:16:39.884 "state": "online", 00:16:39.884 "raid_level": "raid5f", 00:16:39.884 "superblock": false, 00:16:39.884 "num_base_bdevs": 4, 00:16:39.884 "num_base_bdevs_discovered": 4, 00:16:39.884 "num_base_bdevs_operational": 4, 00:16:39.884 "process": { 00:16:39.884 "type": "rebuild", 00:16:39.884 "target": "spare", 00:16:39.884 "progress": { 00:16:39.884 "blocks": 153600, 00:16:39.884 "percent": 78 00:16:39.884 } 00:16:39.884 }, 00:16:39.884 "base_bdevs_list": [ 00:16:39.884 { 00:16:39.884 "name": "spare", 00:16:39.884 "uuid": "49c84adf-a1a2-521c-9224-255598ea9f6f", 00:16:39.884 "is_configured": true, 00:16:39.884 "data_offset": 0, 00:16:39.884 "data_size": 65536 00:16:39.884 }, 00:16:39.884 { 00:16:39.884 "name": "BaseBdev2", 00:16:39.884 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:39.884 "is_configured": true, 00:16:39.884 "data_offset": 0, 00:16:39.884 "data_size": 65536 00:16:39.884 }, 00:16:39.884 { 00:16:39.884 "name": "BaseBdev3", 00:16:39.884 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:39.884 "is_configured": true, 00:16:39.884 "data_offset": 0, 00:16:39.884 "data_size": 65536 00:16:39.884 }, 00:16:39.884 { 00:16:39.884 "name": "BaseBdev4", 00:16:39.884 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:39.884 "is_configured": true, 00:16:39.884 "data_offset": 0, 00:16:39.884 "data_size": 65536 00:16:39.884 } 00:16:39.884 ] 00:16:39.884 }' 00:16:39.884 12:08:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.884 12:08:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:16:39.884 12:08:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.884 12:08:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.884 12:08:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.823 12:08:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.823 12:08:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.823 12:08:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.823 12:08:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.823 12:08:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.823 12:08:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.823 12:08:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.823 12:08:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.823 12:08:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.823 12:08:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.823 12:08:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.823 12:08:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.823 "name": "raid_bdev1", 00:16:40.823 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:40.823 "strip_size_kb": 64, 00:16:40.823 "state": "online", 00:16:40.823 "raid_level": "raid5f", 00:16:40.823 "superblock": false, 00:16:40.823 "num_base_bdevs": 4, 00:16:40.823 "num_base_bdevs_discovered": 4, 00:16:40.823 "num_base_bdevs_operational": 4, 00:16:40.823 
"process": { 00:16:40.823 "type": "rebuild", 00:16:40.823 "target": "spare", 00:16:40.823 "progress": { 00:16:40.823 "blocks": 174720, 00:16:40.823 "percent": 88 00:16:40.823 } 00:16:40.823 }, 00:16:40.823 "base_bdevs_list": [ 00:16:40.823 { 00:16:40.823 "name": "spare", 00:16:40.823 "uuid": "49c84adf-a1a2-521c-9224-255598ea9f6f", 00:16:40.823 "is_configured": true, 00:16:40.823 "data_offset": 0, 00:16:40.823 "data_size": 65536 00:16:40.823 }, 00:16:40.823 { 00:16:40.823 "name": "BaseBdev2", 00:16:40.823 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:40.823 "is_configured": true, 00:16:40.823 "data_offset": 0, 00:16:40.823 "data_size": 65536 00:16:40.823 }, 00:16:40.823 { 00:16:40.823 "name": "BaseBdev3", 00:16:40.823 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:40.823 "is_configured": true, 00:16:40.823 "data_offset": 0, 00:16:40.823 "data_size": 65536 00:16:40.823 }, 00:16:40.823 { 00:16:40.823 "name": "BaseBdev4", 00:16:40.823 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:40.823 "is_configured": true, 00:16:40.823 "data_offset": 0, 00:16:40.823 "data_size": 65536 00:16:40.823 } 00:16:40.823 ] 00:16:40.823 }' 00:16:40.823 12:08:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.083 12:08:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:41.083 12:08:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.083 12:08:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.083 12:08:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.020 [2024-11-19 12:08:45.297937] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:42.020 [2024-11-19 12:08:45.298021] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:42.020 [2024-11-19 12:08:45.298076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.020 "name": "raid_bdev1", 00:16:42.020 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:42.020 "strip_size_kb": 64, 00:16:42.020 "state": "online", 00:16:42.020 "raid_level": "raid5f", 00:16:42.020 "superblock": false, 00:16:42.020 "num_base_bdevs": 4, 00:16:42.020 "num_base_bdevs_discovered": 4, 00:16:42.020 "num_base_bdevs_operational": 4, 00:16:42.020 "process": { 00:16:42.020 "type": "rebuild", 00:16:42.020 "target": "spare", 00:16:42.020 "progress": { 00:16:42.020 "blocks": 195840, 00:16:42.020 "percent": 99 00:16:42.020 } 00:16:42.020 }, 00:16:42.020 "base_bdevs_list": [ 
00:16:42.020 { 00:16:42.020 "name": "spare", 00:16:42.020 "uuid": "49c84adf-a1a2-521c-9224-255598ea9f6f", 00:16:42.020 "is_configured": true, 00:16:42.020 "data_offset": 0, 00:16:42.020 "data_size": 65536 00:16:42.020 }, 00:16:42.020 { 00:16:42.020 "name": "BaseBdev2", 00:16:42.020 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:42.020 "is_configured": true, 00:16:42.020 "data_offset": 0, 00:16:42.020 "data_size": 65536 00:16:42.020 }, 00:16:42.020 { 00:16:42.020 "name": "BaseBdev3", 00:16:42.020 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:42.020 "is_configured": true, 00:16:42.020 "data_offset": 0, 00:16:42.020 "data_size": 65536 00:16:42.020 }, 00:16:42.020 { 00:16:42.020 "name": "BaseBdev4", 00:16:42.020 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:42.020 "is_configured": true, 00:16:42.020 "data_offset": 0, 00:16:42.020 "data_size": 65536 00:16:42.020 } 00:16:42.020 ] 00:16:42.020 }' 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.020 12:08:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.278 12:08:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.278 12:08:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:43.214 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.214 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.214 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.214 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.214 12:08:46 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.214 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.214 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.214 12:08:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.214 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.214 12:08:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.214 12:08:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.214 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.214 "name": "raid_bdev1", 00:16:43.214 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:43.214 "strip_size_kb": 64, 00:16:43.214 "state": "online", 00:16:43.214 "raid_level": "raid5f", 00:16:43.214 "superblock": false, 00:16:43.214 "num_base_bdevs": 4, 00:16:43.214 "num_base_bdevs_discovered": 4, 00:16:43.214 "num_base_bdevs_operational": 4, 00:16:43.214 "base_bdevs_list": [ 00:16:43.214 { 00:16:43.214 "name": "spare", 00:16:43.214 "uuid": "49c84adf-a1a2-521c-9224-255598ea9f6f", 00:16:43.214 "is_configured": true, 00:16:43.214 "data_offset": 0, 00:16:43.214 "data_size": 65536 00:16:43.214 }, 00:16:43.214 { 00:16:43.214 "name": "BaseBdev2", 00:16:43.214 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:43.215 "is_configured": true, 00:16:43.215 "data_offset": 0, 00:16:43.215 "data_size": 65536 00:16:43.215 }, 00:16:43.215 { 00:16:43.215 "name": "BaseBdev3", 00:16:43.215 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:43.215 "is_configured": true, 00:16:43.215 "data_offset": 0, 00:16:43.215 "data_size": 65536 00:16:43.215 }, 00:16:43.215 { 00:16:43.215 "name": "BaseBdev4", 00:16:43.215 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:43.215 "is_configured": 
true, 00:16:43.215 "data_offset": 0, 00:16:43.215 "data_size": 65536 00:16:43.215 } 00:16:43.215 ] 00:16:43.215 }' 00:16:43.215 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.215 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:43.215 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.474 "name": "raid_bdev1", 00:16:43.474 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:43.474 "strip_size_kb": 64, 00:16:43.474 "state": 
"online", 00:16:43.474 "raid_level": "raid5f", 00:16:43.474 "superblock": false, 00:16:43.474 "num_base_bdevs": 4, 00:16:43.474 "num_base_bdevs_discovered": 4, 00:16:43.474 "num_base_bdevs_operational": 4, 00:16:43.474 "base_bdevs_list": [ 00:16:43.474 { 00:16:43.474 "name": "spare", 00:16:43.474 "uuid": "49c84adf-a1a2-521c-9224-255598ea9f6f", 00:16:43.474 "is_configured": true, 00:16:43.474 "data_offset": 0, 00:16:43.474 "data_size": 65536 00:16:43.474 }, 00:16:43.474 { 00:16:43.474 "name": "BaseBdev2", 00:16:43.474 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:43.474 "is_configured": true, 00:16:43.474 "data_offset": 0, 00:16:43.474 "data_size": 65536 00:16:43.474 }, 00:16:43.474 { 00:16:43.474 "name": "BaseBdev3", 00:16:43.474 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:43.474 "is_configured": true, 00:16:43.474 "data_offset": 0, 00:16:43.474 "data_size": 65536 00:16:43.474 }, 00:16:43.474 { 00:16:43.474 "name": "BaseBdev4", 00:16:43.474 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:43.474 "is_configured": true, 00:16:43.474 "data_offset": 0, 00:16:43.474 "data_size": 65536 00:16:43.474 } 00:16:43.474 ] 00:16:43.474 }' 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.474 12:08:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.474 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.475 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.475 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.475 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.475 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.475 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.475 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.475 12:08:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.475 12:08:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.475 12:08:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.475 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.475 "name": "raid_bdev1", 00:16:43.475 "uuid": "0173a964-b7a5-40af-b38b-9b03f8ac950d", 00:16:43.475 "strip_size_kb": 64, 00:16:43.475 "state": "online", 00:16:43.475 "raid_level": "raid5f", 00:16:43.475 "superblock": false, 00:16:43.475 "num_base_bdevs": 4, 00:16:43.475 "num_base_bdevs_discovered": 4, 00:16:43.475 "num_base_bdevs_operational": 4, 00:16:43.475 "base_bdevs_list": [ 00:16:43.475 { 00:16:43.475 "name": "spare", 00:16:43.475 "uuid": "49c84adf-a1a2-521c-9224-255598ea9f6f", 00:16:43.475 "is_configured": true, 00:16:43.475 "data_offset": 0, 00:16:43.475 "data_size": 65536 00:16:43.475 }, 00:16:43.475 { 00:16:43.475 
"name": "BaseBdev2", 00:16:43.475 "uuid": "3e001658-c42f-5031-8ab1-5e952d861642", 00:16:43.475 "is_configured": true, 00:16:43.475 "data_offset": 0, 00:16:43.475 "data_size": 65536 00:16:43.475 }, 00:16:43.475 { 00:16:43.475 "name": "BaseBdev3", 00:16:43.475 "uuid": "edc6c0d6-1481-59e8-bde4-a69d00b60f62", 00:16:43.475 "is_configured": true, 00:16:43.475 "data_offset": 0, 00:16:43.475 "data_size": 65536 00:16:43.475 }, 00:16:43.475 { 00:16:43.475 "name": "BaseBdev4", 00:16:43.475 "uuid": "ce3ac071-458e-5f31-abe7-d5c032a55f48", 00:16:43.475 "is_configured": true, 00:16:43.475 "data_offset": 0, 00:16:43.475 "data_size": 65536 00:16:43.475 } 00:16:43.475 ] 00:16:43.475 }' 00:16:43.475 12:08:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.475 12:08:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.044 [2024-11-19 12:08:47.185695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.044 [2024-11-19 12:08:47.185729] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.044 [2024-11-19 12:08:47.185813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.044 [2024-11-19 12:08:47.185906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.044 [2024-11-19 12:08:47.185918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.044 12:08:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:44.044 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:44.304 /dev/nbd0 00:16:44.304 12:08:47 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.304 1+0 records in 00:16:44.304 1+0 records out 00:16:44.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358339 s, 11.4 MB/s 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:44.304 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:44.563 /dev/nbd1 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.563 1+0 records in 00:16:44.563 1+0 records out 00:16:44.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391079 s, 10.5 MB/s 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:44.563 12:08:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:44.822 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:44.822 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:44.822 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:44.822 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:44.822 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:44.822 12:08:48 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:44.822 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:44.822 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:44.822 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:44.822 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84522 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84522 ']' 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84522 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84522 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.082 killing process with pid 84522 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84522' 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84522 00:16:45.082 Received shutdown signal, test time was about 60.000000 seconds 00:16:45.082 00:16:45.082 Latency(us) 00:16:45.082 [2024-11-19T12:08:48.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.082 [2024-11-19T12:08:48.459Z] =================================================================================================================== 00:16:45.082 [2024-11-19T12:08:48.459Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:45.082 [2024-11-19 12:08:48.359894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:45.082 12:08:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84522 00:16:45.651 [2024-11-19 12:08:48.824570] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.656 12:08:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:46.656 00:16:46.656 real 0m20.003s 00:16:46.656 user 0m23.845s 00:16:46.656 sys 0m2.384s 00:16:46.656 12:08:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.656 12:08:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.656 ************************************ 00:16:46.656 END TEST raid5f_rebuild_test 00:16:46.656 ************************************ 00:16:46.656 12:08:49 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:16:46.656 12:08:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:46.656 12:08:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.656 12:08:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.656 ************************************ 00:16:46.656 START TEST raid5f_rebuild_test_sb 00:16:46.656 ************************************ 00:16:46.656 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:46.656 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:46.656 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:46.656 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:46.656 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:46.656 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:46.656 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:46.656 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:46.657 12:08:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85046 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85046 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85046 ']' 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.657 12:08:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.946 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:46.946 Zero copy mechanism will not be used. 00:16:46.946 [2024-11-19 12:08:50.018706] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:16:46.946 [2024-11-19 12:08:50.018837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85046 ] 00:16:46.946 [2024-11-19 12:08:50.189024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.946 [2024-11-19 12:08:50.297385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.206 [2024-11-19 12:08:50.479536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.206 [2024-11-19 12:08:50.479575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.465 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.465 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:47.465 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:47.465 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:47.465 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.465 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.725 BaseBdev1_malloc 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.725 [2024-11-19 12:08:50.870531] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:47.725 [2024-11-19 12:08:50.870598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.725 [2024-11-19 12:08:50.870623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:47.725 [2024-11-19 12:08:50.870635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.725 [2024-11-19 12:08:50.872626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.725 [2024-11-19 12:08:50.872667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:47.725 BaseBdev1 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.725 BaseBdev2_malloc 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.725 [2024-11-19 12:08:50.919773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:47.725 [2024-11-19 12:08:50.919827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:47.725 [2024-11-19 12:08:50.919846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:47.725 [2024-11-19 12:08:50.919858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.725 [2024-11-19 12:08:50.921830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.725 [2024-11-19 12:08:50.921863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:47.725 BaseBdev2 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.725 BaseBdev3_malloc 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.725 [2024-11-19 12:08:50.990711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:47.725 [2024-11-19 12:08:50.990763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.725 [2024-11-19 12:08:50.990784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:47.725 [2024-11-19 
12:08:50.990795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.725 [2024-11-19 12:08:50.992846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.725 [2024-11-19 12:08:50.992953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:47.725 BaseBdev3 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.725 12:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.725 BaseBdev4_malloc 00:16:47.725 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.725 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:47.726 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.726 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.726 [2024-11-19 12:08:51.045025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:47.726 [2024-11-19 12:08:51.045073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.726 [2024-11-19 12:08:51.045090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:47.726 [2024-11-19 12:08:51.045101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.726 [2024-11-19 12:08:51.047241] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:47.726 [2024-11-19 12:08:51.047281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:47.726 BaseBdev4 00:16:47.726 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.726 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:47.726 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.726 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.726 spare_malloc 00:16:47.726 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.726 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:47.726 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.726 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.986 spare_delay 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.986 [2024-11-19 12:08:51.110816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:47.986 [2024-11-19 12:08:51.110869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.986 [2024-11-19 12:08:51.110888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:47.986 [2024-11-19 12:08:51.110899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.986 [2024-11-19 12:08:51.112987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.986 [2024-11-19 12:08:51.113036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:47.986 spare 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.986 [2024-11-19 12:08:51.122844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.986 [2024-11-19 12:08:51.124587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:47.986 [2024-11-19 12:08:51.124649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:47.986 [2024-11-19 12:08:51.124697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:47.986 [2024-11-19 12:08:51.124892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:47.986 [2024-11-19 12:08:51.124907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:47.986 [2024-11-19 12:08:51.125124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:47.986 [2024-11-19 12:08:51.132181] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:47.986 [2024-11-19 12:08:51.132249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:47.986 [2024-11-19 12:08:51.132454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.986 12:08:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.986 "name": "raid_bdev1", 00:16:47.986 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:16:47.986 "strip_size_kb": 64, 00:16:47.986 "state": "online", 00:16:47.986 "raid_level": "raid5f", 00:16:47.986 "superblock": true, 00:16:47.986 "num_base_bdevs": 4, 00:16:47.986 "num_base_bdevs_discovered": 4, 00:16:47.986 "num_base_bdevs_operational": 4, 00:16:47.986 "base_bdevs_list": [ 00:16:47.986 { 00:16:47.986 "name": "BaseBdev1", 00:16:47.986 "uuid": "f1540b5f-282e-5655-be11-8856bfb950c0", 00:16:47.986 "is_configured": true, 00:16:47.986 "data_offset": 2048, 00:16:47.986 "data_size": 63488 00:16:47.986 }, 00:16:47.986 { 00:16:47.986 "name": "BaseBdev2", 00:16:47.986 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:16:47.986 "is_configured": true, 00:16:47.986 "data_offset": 2048, 00:16:47.986 "data_size": 63488 00:16:47.986 }, 00:16:47.986 { 00:16:47.986 "name": "BaseBdev3", 00:16:47.986 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:16:47.986 "is_configured": true, 00:16:47.986 "data_offset": 2048, 00:16:47.986 "data_size": 63488 00:16:47.986 }, 00:16:47.986 { 00:16:47.986 "name": "BaseBdev4", 00:16:47.986 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:16:47.986 "is_configured": true, 00:16:47.986 "data_offset": 2048, 00:16:47.986 "data_size": 63488 00:16:47.986 } 00:16:47.986 ] 00:16:47.986 }' 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.986 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.246 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:48.246 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.246 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.246 12:08:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.246 [2024-11-19 12:08:51.564111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.246 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.246 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:48.246 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.246 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:48.246 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.246 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.246 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.505 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:48.505 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:48.505 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:48.505 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:48.505 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:48.506 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:48.506 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:48.506 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:48.506 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:48.506 12:08:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:48.506 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:48.506 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:48.506 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:48.506 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:48.506 [2024-11-19 12:08:51.819528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:48.506 /dev/nbd0 00:16:48.506 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:48.766 1+0 records in 00:16:48.766 
1+0 records out 00:16:48.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590326 s, 6.9 MB/s 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:48.766 12:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:49.025 496+0 records in 00:16:49.025 496+0 records out 00:16:49.025 97517568 bytes (98 MB, 93 MiB) copied, 0.45297 s, 215 MB/s 00:16:49.025 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:49.025 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:49.025 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:49.025 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:49.025 12:08:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:49.025 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.025 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:49.285 [2024-11-19 12:08:52.570131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.285 [2024-11-19 12:08:52.600156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:49.285 12:08:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.285 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.544 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.545 "name": "raid_bdev1", 00:16:49.545 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:16:49.545 "strip_size_kb": 64, 00:16:49.545 "state": "online", 00:16:49.545 "raid_level": "raid5f", 00:16:49.545 "superblock": true, 00:16:49.545 "num_base_bdevs": 4, 00:16:49.545 "num_base_bdevs_discovered": 3, 00:16:49.545 "num_base_bdevs_operational": 3, 00:16:49.545 
"base_bdevs_list": [ 00:16:49.545 { 00:16:49.545 "name": null, 00:16:49.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.545 "is_configured": false, 00:16:49.545 "data_offset": 0, 00:16:49.545 "data_size": 63488 00:16:49.545 }, 00:16:49.545 { 00:16:49.545 "name": "BaseBdev2", 00:16:49.545 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:16:49.545 "is_configured": true, 00:16:49.545 "data_offset": 2048, 00:16:49.545 "data_size": 63488 00:16:49.545 }, 00:16:49.545 { 00:16:49.545 "name": "BaseBdev3", 00:16:49.545 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:16:49.545 "is_configured": true, 00:16:49.545 "data_offset": 2048, 00:16:49.545 "data_size": 63488 00:16:49.545 }, 00:16:49.545 { 00:16:49.545 "name": "BaseBdev4", 00:16:49.545 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:16:49.545 "is_configured": true, 00:16:49.545 "data_offset": 2048, 00:16:49.545 "data_size": 63488 00:16:49.545 } 00:16:49.545 ] 00:16:49.545 }' 00:16:49.545 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.545 12:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.804 12:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:49.804 12:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.804 12:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.804 [2024-11-19 12:08:53.023439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:49.804 [2024-11-19 12:08:53.038551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:49.804 12:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.804 12:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:49.804 [2024-11-19 12:08:53.047478] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:50.743 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.743 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.743 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.743 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.743 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.743 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.743 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.743 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.743 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.743 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.743 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.743 "name": "raid_bdev1", 00:16:50.743 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:16:50.743 "strip_size_kb": 64, 00:16:50.743 "state": "online", 00:16:50.743 "raid_level": "raid5f", 00:16:50.743 "superblock": true, 00:16:50.743 "num_base_bdevs": 4, 00:16:50.743 "num_base_bdevs_discovered": 4, 00:16:50.743 "num_base_bdevs_operational": 4, 00:16:50.743 "process": { 00:16:50.743 "type": "rebuild", 00:16:50.743 "target": "spare", 00:16:50.743 "progress": { 00:16:50.743 "blocks": 19200, 00:16:50.743 "percent": 10 00:16:50.743 } 00:16:50.743 }, 00:16:50.743 "base_bdevs_list": [ 00:16:50.743 { 00:16:50.743 "name": "spare", 00:16:50.743 "uuid": 
"b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:16:50.743 "is_configured": true, 00:16:50.743 "data_offset": 2048, 00:16:50.743 "data_size": 63488 00:16:50.743 }, 00:16:50.743 { 00:16:50.743 "name": "BaseBdev2", 00:16:50.743 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:16:50.743 "is_configured": true, 00:16:50.743 "data_offset": 2048, 00:16:50.743 "data_size": 63488 00:16:50.743 }, 00:16:50.743 { 00:16:50.743 "name": "BaseBdev3", 00:16:50.743 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:16:50.743 "is_configured": true, 00:16:50.743 "data_offset": 2048, 00:16:50.743 "data_size": 63488 00:16:50.743 }, 00:16:50.743 { 00:16:50.743 "name": "BaseBdev4", 00:16:50.743 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:16:50.743 "is_configured": true, 00:16:50.743 "data_offset": 2048, 00:16:50.743 "data_size": 63488 00:16:50.743 } 00:16:50.743 ] 00:16:50.743 }' 00:16:50.743 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.002 [2024-11-19 12:08:54.198402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:51.002 [2024-11-19 12:08:54.253301] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:51.002 [2024-11-19 12:08:54.253363] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.002 [2024-11-19 12:08:54.253379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:51.002 [2024-11-19 12:08:54.253388] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.002 "name": "raid_bdev1", 00:16:51.002 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:16:51.002 "strip_size_kb": 64, 00:16:51.002 "state": "online", 00:16:51.002 "raid_level": "raid5f", 00:16:51.002 "superblock": true, 00:16:51.002 "num_base_bdevs": 4, 00:16:51.002 "num_base_bdevs_discovered": 3, 00:16:51.002 "num_base_bdevs_operational": 3, 00:16:51.002 "base_bdevs_list": [ 00:16:51.002 { 00:16:51.002 "name": null, 00:16:51.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.002 "is_configured": false, 00:16:51.002 "data_offset": 0, 00:16:51.002 "data_size": 63488 00:16:51.002 }, 00:16:51.002 { 00:16:51.002 "name": "BaseBdev2", 00:16:51.002 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:16:51.002 "is_configured": true, 00:16:51.002 "data_offset": 2048, 00:16:51.002 "data_size": 63488 00:16:51.002 }, 00:16:51.002 { 00:16:51.002 "name": "BaseBdev3", 00:16:51.002 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:16:51.002 "is_configured": true, 00:16:51.002 "data_offset": 2048, 00:16:51.002 "data_size": 63488 00:16:51.002 }, 00:16:51.002 { 00:16:51.002 "name": "BaseBdev4", 00:16:51.002 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:16:51.002 "is_configured": true, 00:16:51.002 "data_offset": 2048, 00:16:51.002 "data_size": 63488 00:16:51.002 } 00:16:51.002 ] 00:16:51.002 }' 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.002 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.571 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:51.571 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.571 
12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:51.571 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:51.571 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.571 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.571 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.571 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.571 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.571 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.571 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.571 "name": "raid_bdev1", 00:16:51.571 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:16:51.571 "strip_size_kb": 64, 00:16:51.571 "state": "online", 00:16:51.571 "raid_level": "raid5f", 00:16:51.571 "superblock": true, 00:16:51.571 "num_base_bdevs": 4, 00:16:51.571 "num_base_bdevs_discovered": 3, 00:16:51.571 "num_base_bdevs_operational": 3, 00:16:51.571 "base_bdevs_list": [ 00:16:51.571 { 00:16:51.571 "name": null, 00:16:51.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.572 "is_configured": false, 00:16:51.572 "data_offset": 0, 00:16:51.572 "data_size": 63488 00:16:51.572 }, 00:16:51.572 { 00:16:51.572 "name": "BaseBdev2", 00:16:51.572 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:16:51.572 "is_configured": true, 00:16:51.572 "data_offset": 2048, 00:16:51.572 "data_size": 63488 00:16:51.572 }, 00:16:51.572 { 00:16:51.572 "name": "BaseBdev3", 00:16:51.572 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:16:51.572 "is_configured": true, 00:16:51.572 "data_offset": 2048, 00:16:51.572 
"data_size": 63488 00:16:51.572 }, 00:16:51.572 { 00:16:51.572 "name": "BaseBdev4", 00:16:51.572 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:16:51.572 "is_configured": true, 00:16:51.572 "data_offset": 2048, 00:16:51.572 "data_size": 63488 00:16:51.572 } 00:16:51.572 ] 00:16:51.572 }' 00:16:51.572 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.572 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:51.572 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.572 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:51.572 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:51.572 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.572 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.572 [2024-11-19 12:08:54.821542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:51.572 [2024-11-19 12:08:54.835741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:51.572 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.572 12:08:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:51.572 [2024-11-19 12:08:54.844459] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:52.509 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.509 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.509 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.509 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.509 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.509 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.509 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.509 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.509 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.509 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.769 "name": "raid_bdev1", 00:16:52.769 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:16:52.769 "strip_size_kb": 64, 00:16:52.769 "state": "online", 00:16:52.769 "raid_level": "raid5f", 00:16:52.769 "superblock": true, 00:16:52.769 "num_base_bdevs": 4, 00:16:52.769 "num_base_bdevs_discovered": 4, 00:16:52.769 "num_base_bdevs_operational": 4, 00:16:52.769 "process": { 00:16:52.769 "type": "rebuild", 00:16:52.769 "target": "spare", 00:16:52.769 "progress": { 00:16:52.769 "blocks": 19200, 00:16:52.769 "percent": 10 00:16:52.769 } 00:16:52.769 }, 00:16:52.769 "base_bdevs_list": [ 00:16:52.769 { 00:16:52.769 "name": "spare", 00:16:52.769 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:16:52.769 "is_configured": true, 00:16:52.769 "data_offset": 2048, 00:16:52.769 "data_size": 63488 00:16:52.769 }, 00:16:52.769 { 00:16:52.769 "name": "BaseBdev2", 00:16:52.769 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:16:52.769 "is_configured": true, 00:16:52.769 "data_offset": 2048, 00:16:52.769 "data_size": 63488 00:16:52.769 }, 00:16:52.769 { 
00:16:52.769 "name": "BaseBdev3", 00:16:52.769 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:16:52.769 "is_configured": true, 00:16:52.769 "data_offset": 2048, 00:16:52.769 "data_size": 63488 00:16:52.769 }, 00:16:52.769 { 00:16:52.769 "name": "BaseBdev4", 00:16:52.769 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:16:52.769 "is_configured": true, 00:16:52.769 "data_offset": 2048, 00:16:52.769 "data_size": 63488 00:16:52.769 } 00:16:52.769 ] 00:16:52.769 }' 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:52.769 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=628 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.769 12:08:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.769 12:08:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.769 12:08:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.769 "name": "raid_bdev1", 00:16:52.769 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:16:52.769 "strip_size_kb": 64, 00:16:52.769 "state": "online", 00:16:52.770 "raid_level": "raid5f", 00:16:52.770 "superblock": true, 00:16:52.770 "num_base_bdevs": 4, 00:16:52.770 "num_base_bdevs_discovered": 4, 00:16:52.770 "num_base_bdevs_operational": 4, 00:16:52.770 "process": { 00:16:52.770 "type": "rebuild", 00:16:52.770 "target": "spare", 00:16:52.770 "progress": { 00:16:52.770 "blocks": 21120, 00:16:52.770 "percent": 11 00:16:52.770 } 00:16:52.770 }, 00:16:52.770 "base_bdevs_list": [ 00:16:52.770 { 00:16:52.770 "name": "spare", 00:16:52.770 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:16:52.770 "is_configured": true, 00:16:52.770 "data_offset": 2048, 00:16:52.770 "data_size": 63488 00:16:52.770 }, 00:16:52.770 { 00:16:52.770 "name": "BaseBdev2", 00:16:52.770 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:16:52.770 "is_configured": true, 00:16:52.770 "data_offset": 2048, 00:16:52.770 "data_size": 63488 00:16:52.770 }, 00:16:52.770 { 
00:16:52.770 "name": "BaseBdev3", 00:16:52.770 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:16:52.770 "is_configured": true, 00:16:52.770 "data_offset": 2048, 00:16:52.770 "data_size": 63488 00:16:52.770 }, 00:16:52.770 { 00:16:52.770 "name": "BaseBdev4", 00:16:52.770 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:16:52.770 "is_configured": true, 00:16:52.770 "data_offset": 2048, 00:16:52.770 "data_size": 63488 00:16:52.770 } 00:16:52.770 ] 00:16:52.770 }' 00:16:52.770 12:08:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.770 12:08:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.770 12:08:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.770 12:08:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.770 12:08:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.150 12:08:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.150 "name": "raid_bdev1", 00:16:54.150 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:16:54.150 "strip_size_kb": 64, 00:16:54.150 "state": "online", 00:16:54.150 "raid_level": "raid5f", 00:16:54.150 "superblock": true, 00:16:54.150 "num_base_bdevs": 4, 00:16:54.150 "num_base_bdevs_discovered": 4, 00:16:54.150 "num_base_bdevs_operational": 4, 00:16:54.150 "process": { 00:16:54.150 "type": "rebuild", 00:16:54.150 "target": "spare", 00:16:54.150 "progress": { 00:16:54.150 "blocks": 42240, 00:16:54.150 "percent": 22 00:16:54.150 } 00:16:54.150 }, 00:16:54.150 "base_bdevs_list": [ 00:16:54.150 { 00:16:54.150 "name": "spare", 00:16:54.150 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:16:54.150 "is_configured": true, 00:16:54.150 "data_offset": 2048, 00:16:54.150 "data_size": 63488 00:16:54.150 }, 00:16:54.150 { 00:16:54.150 "name": "BaseBdev2", 00:16:54.150 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:16:54.150 "is_configured": true, 00:16:54.150 "data_offset": 2048, 00:16:54.150 "data_size": 63488 00:16:54.150 }, 00:16:54.150 { 00:16:54.150 "name": "BaseBdev3", 00:16:54.150 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:16:54.150 "is_configured": true, 00:16:54.150 "data_offset": 2048, 00:16:54.150 "data_size": 63488 00:16:54.150 }, 00:16:54.150 { 00:16:54.150 "name": "BaseBdev4", 00:16:54.150 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:16:54.150 "is_configured": true, 00:16:54.150 "data_offset": 2048, 00:16:54.150 "data_size": 63488 00:16:54.150 } 00:16:54.150 ] 00:16:54.150 }' 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.150 12:08:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.088 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.088 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.089 "name": "raid_bdev1", 00:16:55.089 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:16:55.089 "strip_size_kb": 64, 00:16:55.089 "state": 
"online", 00:16:55.089 "raid_level": "raid5f", 00:16:55.089 "superblock": true, 00:16:55.089 "num_base_bdevs": 4, 00:16:55.089 "num_base_bdevs_discovered": 4, 00:16:55.089 "num_base_bdevs_operational": 4, 00:16:55.089 "process": { 00:16:55.089 "type": "rebuild", 00:16:55.089 "target": "spare", 00:16:55.089 "progress": { 00:16:55.089 "blocks": 65280, 00:16:55.089 "percent": 34 00:16:55.089 } 00:16:55.089 }, 00:16:55.089 "base_bdevs_list": [ 00:16:55.089 { 00:16:55.089 "name": "spare", 00:16:55.089 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:16:55.089 "is_configured": true, 00:16:55.089 "data_offset": 2048, 00:16:55.089 "data_size": 63488 00:16:55.089 }, 00:16:55.089 { 00:16:55.089 "name": "BaseBdev2", 00:16:55.089 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:16:55.089 "is_configured": true, 00:16:55.089 "data_offset": 2048, 00:16:55.089 "data_size": 63488 00:16:55.089 }, 00:16:55.089 { 00:16:55.089 "name": "BaseBdev3", 00:16:55.089 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:16:55.089 "is_configured": true, 00:16:55.089 "data_offset": 2048, 00:16:55.089 "data_size": 63488 00:16:55.089 }, 00:16:55.089 { 00:16:55.089 "name": "BaseBdev4", 00:16:55.089 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:16:55.089 "is_configured": true, 00:16:55.089 "data_offset": 2048, 00:16:55.089 "data_size": 63488 00:16:55.089 } 00:16:55.089 ] 00:16:55.089 }' 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.089 12:08:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.469 "name": "raid_bdev1", 00:16:56.469 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:16:56.469 "strip_size_kb": 64, 00:16:56.469 "state": "online", 00:16:56.469 "raid_level": "raid5f", 00:16:56.469 "superblock": true, 00:16:56.469 "num_base_bdevs": 4, 00:16:56.469 "num_base_bdevs_discovered": 4, 00:16:56.469 "num_base_bdevs_operational": 4, 00:16:56.469 "process": { 00:16:56.469 "type": "rebuild", 00:16:56.469 "target": "spare", 00:16:56.469 "progress": { 00:16:56.469 "blocks": 86400, 00:16:56.469 "percent": 45 00:16:56.469 } 00:16:56.469 }, 00:16:56.469 "base_bdevs_list": [ 00:16:56.469 { 00:16:56.469 "name": "spare", 00:16:56.469 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 
00:16:56.469 "is_configured": true, 00:16:56.469 "data_offset": 2048, 00:16:56.469 "data_size": 63488 00:16:56.469 }, 00:16:56.469 { 00:16:56.469 "name": "BaseBdev2", 00:16:56.469 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:16:56.469 "is_configured": true, 00:16:56.469 "data_offset": 2048, 00:16:56.469 "data_size": 63488 00:16:56.469 }, 00:16:56.469 { 00:16:56.469 "name": "BaseBdev3", 00:16:56.469 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:16:56.469 "is_configured": true, 00:16:56.469 "data_offset": 2048, 00:16:56.469 "data_size": 63488 00:16:56.469 }, 00:16:56.469 { 00:16:56.469 "name": "BaseBdev4", 00:16:56.469 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:16:56.469 "is_configured": true, 00:16:56.469 "data_offset": 2048, 00:16:56.469 "data_size": 63488 00:16:56.469 } 00:16:56.469 ] 00:16:56.469 }' 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.469 12:08:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:57.406 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.406 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.406 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.406 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.406 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.406 12:09:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.406 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.406 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.406 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.406 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.406 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.406 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.406 "name": "raid_bdev1", 00:16:57.406 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:16:57.406 "strip_size_kb": 64, 00:16:57.406 "state": "online", 00:16:57.406 "raid_level": "raid5f", 00:16:57.406 "superblock": true, 00:16:57.406 "num_base_bdevs": 4, 00:16:57.406 "num_base_bdevs_discovered": 4, 00:16:57.406 "num_base_bdevs_operational": 4, 00:16:57.406 "process": { 00:16:57.406 "type": "rebuild", 00:16:57.406 "target": "spare", 00:16:57.407 "progress": { 00:16:57.407 "blocks": 107520, 00:16:57.407 "percent": 56 00:16:57.407 } 00:16:57.407 }, 00:16:57.407 "base_bdevs_list": [ 00:16:57.407 { 00:16:57.407 "name": "spare", 00:16:57.407 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:16:57.407 "is_configured": true, 00:16:57.407 "data_offset": 2048, 00:16:57.407 "data_size": 63488 00:16:57.407 }, 00:16:57.407 { 00:16:57.407 "name": "BaseBdev2", 00:16:57.407 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:16:57.407 "is_configured": true, 00:16:57.407 "data_offset": 2048, 00:16:57.407 "data_size": 63488 00:16:57.407 }, 00:16:57.407 { 00:16:57.407 "name": "BaseBdev3", 00:16:57.407 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:16:57.407 "is_configured": true, 00:16:57.407 "data_offset": 2048, 00:16:57.407 
"data_size": 63488 00:16:57.407 }, 00:16:57.407 { 00:16:57.407 "name": "BaseBdev4", 00:16:57.407 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:16:57.407 "is_configured": true, 00:16:57.407 "data_offset": 2048, 00:16:57.407 "data_size": 63488 00:16:57.407 } 00:16:57.407 ] 00:16:57.407 }' 00:16:57.407 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.407 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.407 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.407 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.407 12:09:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.344 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.344 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.344 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.344 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.344 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.344 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.344 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.344 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.344 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.344 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.344 
12:09:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.603 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.603 "name": "raid_bdev1", 00:16:58.603 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:16:58.603 "strip_size_kb": 64, 00:16:58.603 "state": "online", 00:16:58.603 "raid_level": "raid5f", 00:16:58.603 "superblock": true, 00:16:58.603 "num_base_bdevs": 4, 00:16:58.603 "num_base_bdevs_discovered": 4, 00:16:58.603 "num_base_bdevs_operational": 4, 00:16:58.603 "process": { 00:16:58.603 "type": "rebuild", 00:16:58.603 "target": "spare", 00:16:58.603 "progress": { 00:16:58.603 "blocks": 130560, 00:16:58.603 "percent": 68 00:16:58.603 } 00:16:58.603 }, 00:16:58.603 "base_bdevs_list": [ 00:16:58.603 { 00:16:58.603 "name": "spare", 00:16:58.603 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:16:58.603 "is_configured": true, 00:16:58.603 "data_offset": 2048, 00:16:58.603 "data_size": 63488 00:16:58.603 }, 00:16:58.603 { 00:16:58.603 "name": "BaseBdev2", 00:16:58.603 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:16:58.603 "is_configured": true, 00:16:58.603 "data_offset": 2048, 00:16:58.603 "data_size": 63488 00:16:58.603 }, 00:16:58.603 { 00:16:58.603 "name": "BaseBdev3", 00:16:58.603 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:16:58.603 "is_configured": true, 00:16:58.603 "data_offset": 2048, 00:16:58.603 "data_size": 63488 00:16:58.603 }, 00:16:58.603 { 00:16:58.603 "name": "BaseBdev4", 00:16:58.603 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:16:58.603 "is_configured": true, 00:16:58.603 "data_offset": 2048, 00:16:58.603 "data_size": 63488 00:16:58.603 } 00:16:58.603 ] 00:16:58.603 }' 00:16:58.603 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.603 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.603 12:09:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.603 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.603 12:09:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.542 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.542 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.542 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.542 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.542 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.542 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.542 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.542 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.542 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.542 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.542 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.542 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.542 "name": "raid_bdev1", 00:16:59.542 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:16:59.542 "strip_size_kb": 64, 00:16:59.542 "state": "online", 00:16:59.542 "raid_level": "raid5f", 00:16:59.542 "superblock": true, 00:16:59.542 "num_base_bdevs": 4, 00:16:59.542 "num_base_bdevs_discovered": 4, 00:16:59.542 "num_base_bdevs_operational": 
4, 00:16:59.542 "process": { 00:16:59.542 "type": "rebuild", 00:16:59.542 "target": "spare", 00:16:59.542 "progress": { 00:16:59.542 "blocks": 151680, 00:16:59.542 "percent": 79 00:16:59.542 } 00:16:59.542 }, 00:16:59.542 "base_bdevs_list": [ 00:16:59.542 { 00:16:59.542 "name": "spare", 00:16:59.542 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:16:59.542 "is_configured": true, 00:16:59.542 "data_offset": 2048, 00:16:59.542 "data_size": 63488 00:16:59.542 }, 00:16:59.542 { 00:16:59.542 "name": "BaseBdev2", 00:16:59.542 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:16:59.542 "is_configured": true, 00:16:59.542 "data_offset": 2048, 00:16:59.542 "data_size": 63488 00:16:59.542 }, 00:16:59.542 { 00:16:59.542 "name": "BaseBdev3", 00:16:59.542 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:16:59.542 "is_configured": true, 00:16:59.542 "data_offset": 2048, 00:16:59.542 "data_size": 63488 00:16:59.542 }, 00:16:59.542 { 00:16:59.542 "name": "BaseBdev4", 00:16:59.542 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:16:59.542 "is_configured": true, 00:16:59.542 "data_offset": 2048, 00:16:59.542 "data_size": 63488 00:16:59.542 } 00:16:59.542 ] 00:16:59.542 }' 00:16:59.542 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.802 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.802 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.802 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.802 12:09:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.738 12:09:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.738 12:09:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.738 
12:09:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.738 12:09:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.738 12:09:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.738 12:09:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.738 12:09:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.738 12:09:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.738 12:09:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.738 12:09:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.738 12:09:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.738 12:09:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.738 "name": "raid_bdev1", 00:17:00.738 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:00.738 "strip_size_kb": 64, 00:17:00.738 "state": "online", 00:17:00.738 "raid_level": "raid5f", 00:17:00.738 "superblock": true, 00:17:00.738 "num_base_bdevs": 4, 00:17:00.738 "num_base_bdevs_discovered": 4, 00:17:00.738 "num_base_bdevs_operational": 4, 00:17:00.738 "process": { 00:17:00.738 "type": "rebuild", 00:17:00.738 "target": "spare", 00:17:00.738 "progress": { 00:17:00.738 "blocks": 174720, 00:17:00.738 "percent": 91 00:17:00.738 } 00:17:00.738 }, 00:17:00.738 "base_bdevs_list": [ 00:17:00.738 { 00:17:00.738 "name": "spare", 00:17:00.738 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:17:00.738 "is_configured": true, 00:17:00.738 "data_offset": 2048, 00:17:00.738 "data_size": 63488 00:17:00.738 }, 00:17:00.738 { 00:17:00.738 "name": "BaseBdev2", 00:17:00.738 "uuid": 
"d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:00.738 "is_configured": true, 00:17:00.738 "data_offset": 2048, 00:17:00.738 "data_size": 63488 00:17:00.738 }, 00:17:00.738 { 00:17:00.738 "name": "BaseBdev3", 00:17:00.738 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:00.738 "is_configured": true, 00:17:00.738 "data_offset": 2048, 00:17:00.738 "data_size": 63488 00:17:00.738 }, 00:17:00.738 { 00:17:00.738 "name": "BaseBdev4", 00:17:00.738 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:00.738 "is_configured": true, 00:17:00.738 "data_offset": 2048, 00:17:00.738 "data_size": 63488 00:17:00.738 } 00:17:00.738 ] 00:17:00.738 }' 00:17:00.738 12:09:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.738 12:09:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.738 12:09:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.997 12:09:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.997 12:09:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.565 [2024-11-19 12:09:04.888851] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:01.565 [2024-11-19 12:09:04.888965] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:01.565 [2024-11-19 12:09:04.889129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.824 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.824 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.824 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.824 12:09:05 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.824 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.825 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.825 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.825 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.825 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.825 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.825 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.825 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.825 "name": "raid_bdev1", 00:17:01.825 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:01.825 "strip_size_kb": 64, 00:17:01.825 "state": "online", 00:17:01.825 "raid_level": "raid5f", 00:17:01.825 "superblock": true, 00:17:01.825 "num_base_bdevs": 4, 00:17:01.825 "num_base_bdevs_discovered": 4, 00:17:01.825 "num_base_bdevs_operational": 4, 00:17:01.825 "base_bdevs_list": [ 00:17:01.825 { 00:17:01.825 "name": "spare", 00:17:01.825 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:17:01.825 "is_configured": true, 00:17:01.825 "data_offset": 2048, 00:17:01.825 "data_size": 63488 00:17:01.825 }, 00:17:01.825 { 00:17:01.825 "name": "BaseBdev2", 00:17:01.825 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:01.825 "is_configured": true, 00:17:01.825 "data_offset": 2048, 00:17:01.825 "data_size": 63488 00:17:01.825 }, 00:17:01.825 { 00:17:01.825 "name": "BaseBdev3", 00:17:01.825 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:01.825 "is_configured": true, 00:17:01.825 "data_offset": 2048, 00:17:01.825 "data_size": 63488 00:17:01.825 }, 
00:17:01.825 { 00:17:01.825 "name": "BaseBdev4", 00:17:01.825 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:01.825 "is_configured": true, 00:17:01.825 "data_offset": 2048, 00:17:01.825 "data_size": 63488 00:17:01.825 } 00:17:01.825 ] 00:17:01.825 }' 00:17:01.825 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.084 "name": "raid_bdev1", 00:17:02.084 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:02.084 "strip_size_kb": 64, 00:17:02.084 "state": "online", 00:17:02.084 "raid_level": "raid5f", 00:17:02.084 "superblock": true, 00:17:02.084 "num_base_bdevs": 4, 00:17:02.084 "num_base_bdevs_discovered": 4, 00:17:02.084 "num_base_bdevs_operational": 4, 00:17:02.084 "base_bdevs_list": [ 00:17:02.084 { 00:17:02.084 "name": "spare", 00:17:02.084 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:17:02.084 "is_configured": true, 00:17:02.084 "data_offset": 2048, 00:17:02.084 "data_size": 63488 00:17:02.084 }, 00:17:02.084 { 00:17:02.084 "name": "BaseBdev2", 00:17:02.084 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:02.084 "is_configured": true, 00:17:02.084 "data_offset": 2048, 00:17:02.084 "data_size": 63488 00:17:02.084 }, 00:17:02.084 { 00:17:02.084 "name": "BaseBdev3", 00:17:02.084 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:02.084 "is_configured": true, 00:17:02.084 "data_offset": 2048, 00:17:02.084 "data_size": 63488 00:17:02.084 }, 00:17:02.084 { 00:17:02.084 "name": "BaseBdev4", 00:17:02.084 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:02.084 "is_configured": true, 00:17:02.084 "data_offset": 2048, 00:17:02.084 "data_size": 63488 00:17:02.084 } 00:17:02.084 ] 00:17:02.084 }' 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.084 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:02.085 12:09:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.085 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.085 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.085 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.085 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.085 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.085 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.085 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.085 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.085 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.085 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.085 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.085 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.085 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.344 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.344 "name": "raid_bdev1", 00:17:02.344 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:02.344 "strip_size_kb": 64, 00:17:02.344 "state": "online", 00:17:02.344 "raid_level": "raid5f", 00:17:02.344 "superblock": true, 00:17:02.344 "num_base_bdevs": 4, 00:17:02.344 "num_base_bdevs_discovered": 4, 00:17:02.344 "num_base_bdevs_operational": 4, 00:17:02.344 
"base_bdevs_list": [ 00:17:02.344 { 00:17:02.344 "name": "spare", 00:17:02.344 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:17:02.344 "is_configured": true, 00:17:02.344 "data_offset": 2048, 00:17:02.344 "data_size": 63488 00:17:02.344 }, 00:17:02.344 { 00:17:02.344 "name": "BaseBdev2", 00:17:02.344 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:02.344 "is_configured": true, 00:17:02.344 "data_offset": 2048, 00:17:02.344 "data_size": 63488 00:17:02.344 }, 00:17:02.344 { 00:17:02.344 "name": "BaseBdev3", 00:17:02.344 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:02.344 "is_configured": true, 00:17:02.344 "data_offset": 2048, 00:17:02.344 "data_size": 63488 00:17:02.344 }, 00:17:02.344 { 00:17:02.344 "name": "BaseBdev4", 00:17:02.344 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:02.344 "is_configured": true, 00:17:02.344 "data_offset": 2048, 00:17:02.344 "data_size": 63488 00:17:02.344 } 00:17:02.344 ] 00:17:02.344 }' 00:17:02.344 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.344 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.604 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.604 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.604 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.604 [2024-11-19 12:09:05.912584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.605 [2024-11-19 12:09:05.912663] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.605 [2024-11-19 12:09:05.912763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.605 [2024-11-19 12:09:05.912878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:17:02.605 [2024-11-19 12:09:05.912947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.605 12:09:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:02.865 /dev/nbd0 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.865 1+0 records in 00:17:02.865 1+0 records out 00:17:02.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377251 s, 10.9 MB/s 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:02.865 12:09:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.865 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:03.125 /dev/nbd1 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:17:03.125 1+0 records in 00:17:03.125 1+0 records out 00:17:03.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279057 s, 14.7 MB/s 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:03.125 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:03.387 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:03.387 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:03.387 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:03.387 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:03.387 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:03.387 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.387 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:03.650 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:17:03.650 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:03.650 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:03.650 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.650 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.650 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:03.650 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:03.650 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.650 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.650 12:09:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.911 [2024-11-19 12:09:07.084239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:03.911 [2024-11-19 12:09:07.084361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.911 [2024-11-19 12:09:07.084423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:03.911 [2024-11-19 12:09:07.084459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.911 [2024-11-19 12:09:07.086705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.911 [2024-11-19 12:09:07.086778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:03.911 [2024-11-19 12:09:07.086876] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:03.911 [2024-11-19 12:09:07.086953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:03.911 [2024-11-19 12:09:07.087200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.911 [2024-11-19 12:09:07.087353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:03.911 [2024-11-19 12:09:07.087472] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:03.911 spare 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.911 [2024-11-19 12:09:07.187414] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:03.911 [2024-11-19 12:09:07.187478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:03.911 [2024-11-19 12:09:07.187772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:03.911 [2024-11-19 12:09:07.194554] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:03.911 [2024-11-19 12:09:07.194609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:03.911 [2024-11-19 12:09:07.194834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.911 "name": "raid_bdev1", 00:17:03.911 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:03.911 "strip_size_kb": 64, 00:17:03.911 "state": "online", 00:17:03.911 "raid_level": "raid5f", 00:17:03.911 "superblock": true, 00:17:03.911 "num_base_bdevs": 4, 00:17:03.911 "num_base_bdevs_discovered": 4, 00:17:03.911 "num_base_bdevs_operational": 4, 00:17:03.911 "base_bdevs_list": [ 00:17:03.911 { 00:17:03.911 "name": "spare", 00:17:03.911 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:17:03.911 "is_configured": true, 00:17:03.911 "data_offset": 2048, 00:17:03.911 "data_size": 63488 00:17:03.911 }, 00:17:03.911 { 00:17:03.911 "name": "BaseBdev2", 00:17:03.911 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:03.911 "is_configured": true, 00:17:03.911 "data_offset": 
2048, 00:17:03.911 "data_size": 63488 00:17:03.911 }, 00:17:03.911 { 00:17:03.911 "name": "BaseBdev3", 00:17:03.911 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:03.911 "is_configured": true, 00:17:03.911 "data_offset": 2048, 00:17:03.911 "data_size": 63488 00:17:03.911 }, 00:17:03.911 { 00:17:03.911 "name": "BaseBdev4", 00:17:03.911 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:03.911 "is_configured": true, 00:17:03.911 "data_offset": 2048, 00:17:03.911 "data_size": 63488 00:17:03.911 } 00:17:03.911 ] 00:17:03.911 }' 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.911 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.482 "name": 
"raid_bdev1", 00:17:04.482 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:04.482 "strip_size_kb": 64, 00:17:04.482 "state": "online", 00:17:04.482 "raid_level": "raid5f", 00:17:04.482 "superblock": true, 00:17:04.482 "num_base_bdevs": 4, 00:17:04.482 "num_base_bdevs_discovered": 4, 00:17:04.482 "num_base_bdevs_operational": 4, 00:17:04.482 "base_bdevs_list": [ 00:17:04.482 { 00:17:04.482 "name": "spare", 00:17:04.482 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:17:04.482 "is_configured": true, 00:17:04.482 "data_offset": 2048, 00:17:04.482 "data_size": 63488 00:17:04.482 }, 00:17:04.482 { 00:17:04.482 "name": "BaseBdev2", 00:17:04.482 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:04.482 "is_configured": true, 00:17:04.482 "data_offset": 2048, 00:17:04.482 "data_size": 63488 00:17:04.482 }, 00:17:04.482 { 00:17:04.482 "name": "BaseBdev3", 00:17:04.482 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:04.482 "is_configured": true, 00:17:04.482 "data_offset": 2048, 00:17:04.482 "data_size": 63488 00:17:04.482 }, 00:17:04.482 { 00:17:04.482 "name": "BaseBdev4", 00:17:04.482 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:04.482 "is_configured": true, 00:17:04.482 "data_offset": 2048, 00:17:04.482 "data_size": 63488 00:17:04.482 } 00:17:04.482 ] 00:17:04.482 }' 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.482 [2024-11-19 12:09:07.778070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.482 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.483 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.483 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.483 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.483 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.483 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.483 "name": "raid_bdev1", 00:17:04.483 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:04.483 "strip_size_kb": 64, 00:17:04.483 "state": "online", 00:17:04.483 "raid_level": "raid5f", 00:17:04.483 "superblock": true, 00:17:04.483 "num_base_bdevs": 4, 00:17:04.483 "num_base_bdevs_discovered": 3, 00:17:04.483 "num_base_bdevs_operational": 3, 00:17:04.483 "base_bdevs_list": [ 00:17:04.483 { 00:17:04.483 "name": null, 00:17:04.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.483 "is_configured": false, 00:17:04.483 "data_offset": 0, 00:17:04.483 "data_size": 63488 00:17:04.483 }, 00:17:04.483 { 00:17:04.483 "name": "BaseBdev2", 00:17:04.483 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:04.483 "is_configured": true, 00:17:04.483 "data_offset": 2048, 00:17:04.483 "data_size": 63488 00:17:04.483 }, 00:17:04.483 { 00:17:04.483 "name": "BaseBdev3", 00:17:04.483 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:04.483 "is_configured": true, 00:17:04.483 "data_offset": 2048, 00:17:04.483 "data_size": 63488 00:17:04.483 }, 00:17:04.483 { 00:17:04.483 "name": "BaseBdev4", 00:17:04.483 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:04.483 "is_configured": true, 00:17:04.483 "data_offset": 
2048, 00:17:04.483 "data_size": 63488 00:17:04.483 } 00:17:04.483 ] 00:17:04.483 }' 00:17:04.483 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.483 12:09:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.053 12:09:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:05.053 12:09:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.053 12:09:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.053 [2024-11-19 12:09:08.205370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.053 [2024-11-19 12:09:08.205621] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:05.053 [2024-11-19 12:09:08.205686] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:05.053 [2024-11-19 12:09:08.205749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.053 [2024-11-19 12:09:08.220519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:05.053 12:09:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.053 12:09:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:05.053 [2024-11-19 12:09:08.229321] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.000 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.000 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.000 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.000 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.000 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.000 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.000 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.000 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.000 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.001 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.001 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.001 "name": "raid_bdev1", 00:17:06.001 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:06.001 "strip_size_kb": 64, 00:17:06.001 "state": "online", 00:17:06.001 
"raid_level": "raid5f", 00:17:06.001 "superblock": true, 00:17:06.001 "num_base_bdevs": 4, 00:17:06.001 "num_base_bdevs_discovered": 4, 00:17:06.001 "num_base_bdevs_operational": 4, 00:17:06.001 "process": { 00:17:06.001 "type": "rebuild", 00:17:06.001 "target": "spare", 00:17:06.001 "progress": { 00:17:06.001 "blocks": 19200, 00:17:06.001 "percent": 10 00:17:06.001 } 00:17:06.001 }, 00:17:06.001 "base_bdevs_list": [ 00:17:06.001 { 00:17:06.001 "name": "spare", 00:17:06.001 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:17:06.001 "is_configured": true, 00:17:06.001 "data_offset": 2048, 00:17:06.001 "data_size": 63488 00:17:06.001 }, 00:17:06.001 { 00:17:06.001 "name": "BaseBdev2", 00:17:06.001 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:06.001 "is_configured": true, 00:17:06.001 "data_offset": 2048, 00:17:06.001 "data_size": 63488 00:17:06.001 }, 00:17:06.001 { 00:17:06.001 "name": "BaseBdev3", 00:17:06.001 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:06.001 "is_configured": true, 00:17:06.001 "data_offset": 2048, 00:17:06.001 "data_size": 63488 00:17:06.001 }, 00:17:06.001 { 00:17:06.001 "name": "BaseBdev4", 00:17:06.001 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:06.001 "is_configured": true, 00:17:06.001 "data_offset": 2048, 00:17:06.001 "data_size": 63488 00:17:06.001 } 00:17:06.001 ] 00:17:06.001 }' 00:17:06.001 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.001 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.001 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.001 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.001 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:06.001 12:09:09 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.001 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.001 [2024-11-19 12:09:09.344266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.260 [2024-11-19 12:09:09.435227] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:06.260 [2024-11-19 12:09:09.435361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.260 [2024-11-19 12:09:09.435400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.260 [2024-11-19 12:09:09.435424] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.260 "name": "raid_bdev1", 00:17:06.260 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:06.260 "strip_size_kb": 64, 00:17:06.260 "state": "online", 00:17:06.260 "raid_level": "raid5f", 00:17:06.260 "superblock": true, 00:17:06.260 "num_base_bdevs": 4, 00:17:06.260 "num_base_bdevs_discovered": 3, 00:17:06.260 "num_base_bdevs_operational": 3, 00:17:06.260 "base_bdevs_list": [ 00:17:06.260 { 00:17:06.260 "name": null, 00:17:06.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.260 "is_configured": false, 00:17:06.260 "data_offset": 0, 00:17:06.260 "data_size": 63488 00:17:06.260 }, 00:17:06.260 { 00:17:06.260 "name": "BaseBdev2", 00:17:06.260 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:06.260 "is_configured": true, 00:17:06.260 "data_offset": 2048, 00:17:06.260 "data_size": 63488 00:17:06.260 }, 00:17:06.260 { 00:17:06.260 "name": "BaseBdev3", 00:17:06.260 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:06.260 "is_configured": true, 00:17:06.260 "data_offset": 2048, 00:17:06.260 "data_size": 63488 00:17:06.260 }, 00:17:06.260 { 00:17:06.260 "name": "BaseBdev4", 00:17:06.260 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:06.260 "is_configured": true, 00:17:06.260 "data_offset": 2048, 00:17:06.260 "data_size": 63488 00:17:06.260 } 00:17:06.260 ] 00:17:06.260 
}' 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.260 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.519 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:06.519 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.519 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.778 [2024-11-19 12:09:09.899777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:06.778 [2024-11-19 12:09:09.899897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.778 [2024-11-19 12:09:09.899947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:06.778 [2024-11-19 12:09:09.899987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.778 [2024-11-19 12:09:09.900559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.778 [2024-11-19 12:09:09.900628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:06.778 [2024-11-19 12:09:09.900760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:06.778 [2024-11-19 12:09:09.900781] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:06.778 [2024-11-19 12:09:09.900791] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:06.778 [2024-11-19 12:09:09.900819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:06.778 [2024-11-19 12:09:09.915280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:06.778 spare 00:17:06.778 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.778 12:09:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:06.778 [2024-11-19 12:09:09.924271] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:07.716 12:09:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.716 12:09:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.716 12:09:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.716 12:09:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.716 12:09:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.716 12:09:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.716 12:09:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.716 12:09:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.716 12:09:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.716 12:09:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.716 12:09:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.716 "name": "raid_bdev1", 00:17:07.716 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:07.716 "strip_size_kb": 64, 00:17:07.716 "state": 
"online", 00:17:07.716 "raid_level": "raid5f", 00:17:07.716 "superblock": true, 00:17:07.716 "num_base_bdevs": 4, 00:17:07.716 "num_base_bdevs_discovered": 4, 00:17:07.716 "num_base_bdevs_operational": 4, 00:17:07.716 "process": { 00:17:07.716 "type": "rebuild", 00:17:07.716 "target": "spare", 00:17:07.716 "progress": { 00:17:07.716 "blocks": 19200, 00:17:07.716 "percent": 10 00:17:07.716 } 00:17:07.716 }, 00:17:07.716 "base_bdevs_list": [ 00:17:07.716 { 00:17:07.716 "name": "spare", 00:17:07.716 "uuid": "b02aadc0-b5d8-5c51-9aa2-b2d66daaa5db", 00:17:07.716 "is_configured": true, 00:17:07.716 "data_offset": 2048, 00:17:07.716 "data_size": 63488 00:17:07.716 }, 00:17:07.716 { 00:17:07.716 "name": "BaseBdev2", 00:17:07.716 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:07.716 "is_configured": true, 00:17:07.716 "data_offset": 2048, 00:17:07.716 "data_size": 63488 00:17:07.716 }, 00:17:07.716 { 00:17:07.716 "name": "BaseBdev3", 00:17:07.716 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:07.716 "is_configured": true, 00:17:07.716 "data_offset": 2048, 00:17:07.716 "data_size": 63488 00:17:07.716 }, 00:17:07.716 { 00:17:07.716 "name": "BaseBdev4", 00:17:07.716 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:07.716 "is_configured": true, 00:17:07.716 "data_offset": 2048, 00:17:07.716 "data_size": 63488 00:17:07.716 } 00:17:07.716 ] 00:17:07.716 }' 00:17:07.716 12:09:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.716 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.716 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.716 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.716 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:07.716 12:09:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.716 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.716 [2024-11-19 12:09:11.075353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.976 [2024-11-19 12:09:11.130193] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:07.976 [2024-11-19 12:09:11.130245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.976 [2024-11-19 12:09:11.130279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.976 [2024-11-19 12:09:11.130287] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.976 12:09:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.976 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.976 "name": "raid_bdev1", 00:17:07.976 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:07.976 "strip_size_kb": 64, 00:17:07.976 "state": "online", 00:17:07.976 "raid_level": "raid5f", 00:17:07.976 "superblock": true, 00:17:07.976 "num_base_bdevs": 4, 00:17:07.976 "num_base_bdevs_discovered": 3, 00:17:07.976 "num_base_bdevs_operational": 3, 00:17:07.976 "base_bdevs_list": [ 00:17:07.976 { 00:17:07.976 "name": null, 00:17:07.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.976 "is_configured": false, 00:17:07.976 "data_offset": 0, 00:17:07.976 "data_size": 63488 00:17:07.976 }, 00:17:07.976 { 00:17:07.976 "name": "BaseBdev2", 00:17:07.976 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:07.976 "is_configured": true, 00:17:07.976 "data_offset": 2048, 00:17:07.976 "data_size": 63488 00:17:07.976 }, 00:17:07.976 { 00:17:07.976 "name": "BaseBdev3", 00:17:07.976 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:07.976 "is_configured": true, 00:17:07.976 "data_offset": 2048, 00:17:07.976 "data_size": 63488 00:17:07.976 }, 00:17:07.976 { 00:17:07.976 "name": "BaseBdev4", 00:17:07.976 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:07.976 "is_configured": true, 00:17:07.976 "data_offset": 2048, 00:17:07.977 
"data_size": 63488 00:17:07.977 } 00:17:07.977 ] 00:17:07.977 }' 00:17:07.977 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.977 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.546 "name": "raid_bdev1", 00:17:08.546 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:08.546 "strip_size_kb": 64, 00:17:08.546 "state": "online", 00:17:08.546 "raid_level": "raid5f", 00:17:08.546 "superblock": true, 00:17:08.546 "num_base_bdevs": 4, 00:17:08.546 "num_base_bdevs_discovered": 3, 00:17:08.546 "num_base_bdevs_operational": 3, 00:17:08.546 "base_bdevs_list": [ 00:17:08.546 { 00:17:08.546 "name": null, 00:17:08.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.546 
"is_configured": false, 00:17:08.546 "data_offset": 0, 00:17:08.546 "data_size": 63488 00:17:08.546 }, 00:17:08.546 { 00:17:08.546 "name": "BaseBdev2", 00:17:08.546 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:08.546 "is_configured": true, 00:17:08.546 "data_offset": 2048, 00:17:08.546 "data_size": 63488 00:17:08.546 }, 00:17:08.546 { 00:17:08.546 "name": "BaseBdev3", 00:17:08.546 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:08.546 "is_configured": true, 00:17:08.546 "data_offset": 2048, 00:17:08.546 "data_size": 63488 00:17:08.546 }, 00:17:08.546 { 00:17:08.546 "name": "BaseBdev4", 00:17:08.546 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:08.546 "is_configured": true, 00:17:08.546 "data_offset": 2048, 00:17:08.546 "data_size": 63488 00:17:08.546 } 00:17:08.546 ] 00:17:08.546 }' 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.546 12:09:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.546 [2024-11-19 12:09:11.762393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:08.546 [2024-11-19 12:09:11.762507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.546 [2024-11-19 12:09:11.762534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:08.546 [2024-11-19 12:09:11.762545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.546 [2024-11-19 12:09:11.763019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.546 [2024-11-19 12:09:11.763038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:08.546 [2024-11-19 12:09:11.763122] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:08.546 [2024-11-19 12:09:11.763137] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:08.546 [2024-11-19 12:09:11.763147] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:08.546 [2024-11-19 12:09:11.763168] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:08.546 BaseBdev1 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.546 12:09:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.485 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.485 "name": "raid_bdev1", 00:17:09.485 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:09.486 "strip_size_kb": 64, 00:17:09.486 "state": "online", 00:17:09.486 "raid_level": "raid5f", 00:17:09.486 "superblock": true, 00:17:09.486 "num_base_bdevs": 4, 00:17:09.486 "num_base_bdevs_discovered": 3, 00:17:09.486 "num_base_bdevs_operational": 3, 00:17:09.486 "base_bdevs_list": [ 00:17:09.486 { 00:17:09.486 "name": null, 00:17:09.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.486 "is_configured": false, 00:17:09.486 
"data_offset": 0, 00:17:09.486 "data_size": 63488 00:17:09.486 }, 00:17:09.486 { 00:17:09.486 "name": "BaseBdev2", 00:17:09.486 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:09.486 "is_configured": true, 00:17:09.486 "data_offset": 2048, 00:17:09.486 "data_size": 63488 00:17:09.486 }, 00:17:09.486 { 00:17:09.486 "name": "BaseBdev3", 00:17:09.486 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:09.486 "is_configured": true, 00:17:09.486 "data_offset": 2048, 00:17:09.486 "data_size": 63488 00:17:09.486 }, 00:17:09.486 { 00:17:09.486 "name": "BaseBdev4", 00:17:09.486 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:09.486 "is_configured": true, 00:17:09.486 "data_offset": 2048, 00:17:09.486 "data_size": 63488 00:17:09.486 } 00:17:09.486 ] 00:17:09.486 }' 00:17:09.486 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.486 12:09:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.055 "name": "raid_bdev1", 00:17:10.055 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:10.055 "strip_size_kb": 64, 00:17:10.055 "state": "online", 00:17:10.055 "raid_level": "raid5f", 00:17:10.055 "superblock": true, 00:17:10.055 "num_base_bdevs": 4, 00:17:10.055 "num_base_bdevs_discovered": 3, 00:17:10.055 "num_base_bdevs_operational": 3, 00:17:10.055 "base_bdevs_list": [ 00:17:10.055 { 00:17:10.055 "name": null, 00:17:10.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.055 "is_configured": false, 00:17:10.055 "data_offset": 0, 00:17:10.055 "data_size": 63488 00:17:10.055 }, 00:17:10.055 { 00:17:10.055 "name": "BaseBdev2", 00:17:10.055 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:10.055 "is_configured": true, 00:17:10.055 "data_offset": 2048, 00:17:10.055 "data_size": 63488 00:17:10.055 }, 00:17:10.055 { 00:17:10.055 "name": "BaseBdev3", 00:17:10.055 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:10.055 "is_configured": true, 00:17:10.055 "data_offset": 2048, 00:17:10.055 "data_size": 63488 00:17:10.055 }, 00:17:10.055 { 00:17:10.055 "name": "BaseBdev4", 00:17:10.055 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:10.055 "is_configured": true, 00:17:10.055 "data_offset": 2048, 00:17:10.055 "data_size": 63488 00:17:10.055 } 00:17:10.055 ] 00:17:10.055 }' 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.055 
12:09:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.055 [2024-11-19 12:09:13.375743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:10.055 [2024-11-19 12:09:13.375913] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:10.055 [2024-11-19 12:09:13.375928] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:10.055 request: 00:17:10.055 { 00:17:10.055 "base_bdev": "BaseBdev1", 00:17:10.055 "raid_bdev": "raid_bdev1", 00:17:10.055 "method": "bdev_raid_add_base_bdev", 00:17:10.055 "req_id": 1 00:17:10.055 } 00:17:10.055 Got JSON-RPC error response 00:17:10.055 response: 00:17:10.055 { 00:17:10.055 "code": -22, 00:17:10.055 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:10.055 } 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.055 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.056 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.056 12:09:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.436 "name": "raid_bdev1", 00:17:11.436 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:11.436 "strip_size_kb": 64, 00:17:11.436 "state": "online", 00:17:11.436 "raid_level": "raid5f", 00:17:11.436 "superblock": true, 00:17:11.436 "num_base_bdevs": 4, 00:17:11.436 "num_base_bdevs_discovered": 3, 00:17:11.436 "num_base_bdevs_operational": 3, 00:17:11.436 "base_bdevs_list": [ 00:17:11.436 { 00:17:11.436 "name": null, 00:17:11.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.436 "is_configured": false, 00:17:11.436 "data_offset": 0, 00:17:11.436 "data_size": 63488 00:17:11.436 }, 00:17:11.436 { 00:17:11.436 "name": "BaseBdev2", 00:17:11.436 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:11.436 "is_configured": true, 00:17:11.436 "data_offset": 2048, 00:17:11.436 "data_size": 63488 00:17:11.436 }, 00:17:11.436 { 00:17:11.436 "name": "BaseBdev3", 00:17:11.436 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:11.436 "is_configured": true, 00:17:11.436 "data_offset": 2048, 00:17:11.436 "data_size": 63488 00:17:11.436 }, 00:17:11.436 { 00:17:11.436 "name": "BaseBdev4", 00:17:11.436 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:11.436 "is_configured": true, 00:17:11.436 "data_offset": 2048, 00:17:11.436 "data_size": 63488 00:17:11.436 } 00:17:11.436 ] 00:17:11.436 }' 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.436 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:11.695 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:11.695 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.695 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:11.695 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:11.695 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.695 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.695 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.695 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.695 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.695 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.695 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.695 "name": "raid_bdev1", 00:17:11.695 "uuid": "95e40f4b-7d77-4a34-9552-040bd5ee5810", 00:17:11.695 "strip_size_kb": 64, 00:17:11.696 "state": "online", 00:17:11.696 "raid_level": "raid5f", 00:17:11.696 "superblock": true, 00:17:11.696 "num_base_bdevs": 4, 00:17:11.696 "num_base_bdevs_discovered": 3, 00:17:11.696 "num_base_bdevs_operational": 3, 00:17:11.696 "base_bdevs_list": [ 00:17:11.696 { 00:17:11.696 "name": null, 00:17:11.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.696 "is_configured": false, 00:17:11.696 "data_offset": 0, 00:17:11.696 "data_size": 63488 00:17:11.696 }, 00:17:11.696 { 00:17:11.696 "name": "BaseBdev2", 00:17:11.696 "uuid": "d56e7f4f-352b-58b3-968c-4636f29ae62a", 00:17:11.696 "is_configured": true, 
00:17:11.696 "data_offset": 2048, 00:17:11.696 "data_size": 63488 00:17:11.696 }, 00:17:11.696 { 00:17:11.696 "name": "BaseBdev3", 00:17:11.696 "uuid": "e2d23f75-cc43-563f-a5f3-5d5b337215ff", 00:17:11.696 "is_configured": true, 00:17:11.696 "data_offset": 2048, 00:17:11.696 "data_size": 63488 00:17:11.696 }, 00:17:11.696 { 00:17:11.696 "name": "BaseBdev4", 00:17:11.696 "uuid": "9a4e42ce-5cd6-553a-b544-c6520bda7226", 00:17:11.696 "is_configured": true, 00:17:11.696 "data_offset": 2048, 00:17:11.696 "data_size": 63488 00:17:11.696 } 00:17:11.696 ] 00:17:11.696 }' 00:17:11.696 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.696 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:11.696 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.696 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:11.696 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85046 00:17:11.696 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85046 ']' 00:17:11.696 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85046 00:17:11.696 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:11.696 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.696 12:09:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85046 00:17:11.696 killing process with pid 85046 00:17:11.696 Received shutdown signal, test time was about 60.000000 seconds 00:17:11.696 00:17:11.696 Latency(us) 00:17:11.696 [2024-11-19T12:09:15.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.696 [2024-11-19T12:09:15.073Z] 
=================================================================================================================== 00:17:11.696 [2024-11-19T12:09:15.073Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:11.696 12:09:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.696 12:09:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.696 12:09:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85046' 00:17:11.696 12:09:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85046 00:17:11.696 [2024-11-19 12:09:15.026298] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:11.696 [2024-11-19 12:09:15.026432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.696 12:09:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85046 00:17:11.696 [2024-11-19 12:09:15.026506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.696 [2024-11-19 12:09:15.026517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:12.266 [2024-11-19 12:09:15.491894] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.205 12:09:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:13.205 00:17:13.205 real 0m26.606s 00:17:13.205 user 0m33.275s 00:17:13.205 sys 0m2.955s 00:17:13.205 12:09:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.205 12:09:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.205 ************************************ 00:17:13.205 END TEST raid5f_rebuild_test_sb 00:17:13.205 ************************************ 00:17:13.465 12:09:16 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:13.465 12:09:16 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:13.465 12:09:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:13.465 12:09:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.465 12:09:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.465 ************************************ 00:17:13.465 START TEST raid_state_function_test_sb_4k 00:17:13.465 ************************************ 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:13.465 12:09:16 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85852 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:13.465 Process raid pid: 85852 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85852' 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85852 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85852 ']' 00:17:13.465 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.465 12:09:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.465 [2024-11-19 12:09:16.701741] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:17:13.465 [2024-11-19 12:09:16.701847] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.724 [2024-11-19 12:09:16.880603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.724 [2024-11-19 12:09:16.999406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.983 [2024-11-19 12:09:17.199260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.983 [2024-11-19 12:09:17.199368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:14.245 12:09:17 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.245 [2024-11-19 12:09:17.523865] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:14.245 [2024-11-19 12:09:17.523982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:14.245 [2024-11-19 12:09:17.524007] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:14.245 [2024-11-19 12:09:17.524019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.245 12:09:17 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.245 "name": "Existed_Raid", 00:17:14.245 "uuid": "5e0de3d1-ee12-489d-bda6-ad8510b166c1", 00:17:14.245 "strip_size_kb": 0, 00:17:14.245 "state": "configuring", 00:17:14.245 "raid_level": "raid1", 00:17:14.245 "superblock": true, 00:17:14.245 "num_base_bdevs": 2, 00:17:14.245 "num_base_bdevs_discovered": 0, 00:17:14.245 "num_base_bdevs_operational": 2, 00:17:14.245 "base_bdevs_list": [ 00:17:14.245 { 00:17:14.245 "name": "BaseBdev1", 00:17:14.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.245 "is_configured": false, 00:17:14.245 "data_offset": 0, 00:17:14.245 "data_size": 0 00:17:14.245 }, 00:17:14.245 { 00:17:14.245 "name": "BaseBdev2", 00:17:14.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.245 "is_configured": false, 00:17:14.245 "data_offset": 0, 00:17:14.245 "data_size": 0 00:17:14.245 } 00:17:14.245 ] 00:17:14.245 }' 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.245 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.860 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:17:14.860 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.860 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.860 [2024-11-19 12:09:17.987124] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:14.860 [2024-11-19 12:09:17.987239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:14.860 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.860 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:14.860 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.860 12:09:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.860 [2024-11-19 12:09:17.999118] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:14.860 [2024-11-19 12:09:17.999164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:14.860 [2024-11-19 12:09:17.999190] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:14.860 [2024-11-19 12:09:17.999201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.860 [2024-11-19 12:09:18.045252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.860 BaseBdev1 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.860 [ 00:17:14.860 { 00:17:14.860 "name": "BaseBdev1", 00:17:14.860 "aliases": [ 00:17:14.860 "bcc7aac7-080d-4c91-836f-71e2bfb0fbbc" 00:17:14.860 
], 00:17:14.860 "product_name": "Malloc disk", 00:17:14.860 "block_size": 4096, 00:17:14.860 "num_blocks": 8192, 00:17:14.860 "uuid": "bcc7aac7-080d-4c91-836f-71e2bfb0fbbc", 00:17:14.860 "assigned_rate_limits": { 00:17:14.860 "rw_ios_per_sec": 0, 00:17:14.860 "rw_mbytes_per_sec": 0, 00:17:14.860 "r_mbytes_per_sec": 0, 00:17:14.860 "w_mbytes_per_sec": 0 00:17:14.860 }, 00:17:14.860 "claimed": true, 00:17:14.860 "claim_type": "exclusive_write", 00:17:14.860 "zoned": false, 00:17:14.860 "supported_io_types": { 00:17:14.860 "read": true, 00:17:14.860 "write": true, 00:17:14.860 "unmap": true, 00:17:14.860 "flush": true, 00:17:14.860 "reset": true, 00:17:14.860 "nvme_admin": false, 00:17:14.860 "nvme_io": false, 00:17:14.860 "nvme_io_md": false, 00:17:14.860 "write_zeroes": true, 00:17:14.860 "zcopy": true, 00:17:14.860 "get_zone_info": false, 00:17:14.860 "zone_management": false, 00:17:14.860 "zone_append": false, 00:17:14.860 "compare": false, 00:17:14.860 "compare_and_write": false, 00:17:14.860 "abort": true, 00:17:14.860 "seek_hole": false, 00:17:14.860 "seek_data": false, 00:17:14.860 "copy": true, 00:17:14.860 "nvme_iov_md": false 00:17:14.860 }, 00:17:14.860 "memory_domains": [ 00:17:14.860 { 00:17:14.860 "dma_device_id": "system", 00:17:14.860 "dma_device_type": 1 00:17:14.860 }, 00:17:14.860 { 00:17:14.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.860 "dma_device_type": 2 00:17:14.860 } 00:17:14.860 ], 00:17:14.860 "driver_specific": {} 00:17:14.860 } 00:17:14.860 ] 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.860 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.861 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.861 "name": "Existed_Raid", 00:17:14.861 "uuid": "95d11369-fba2-41ab-b05c-3d7edfbffd9c", 00:17:14.861 "strip_size_kb": 0, 00:17:14.861 "state": "configuring", 00:17:14.861 "raid_level": "raid1", 00:17:14.861 "superblock": true, 00:17:14.861 "num_base_bdevs": 2, 00:17:14.861 "num_base_bdevs_discovered": 1, 
00:17:14.861 "num_base_bdevs_operational": 2, 00:17:14.861 "base_bdevs_list": [ 00:17:14.861 { 00:17:14.861 "name": "BaseBdev1", 00:17:14.861 "uuid": "bcc7aac7-080d-4c91-836f-71e2bfb0fbbc", 00:17:14.861 "is_configured": true, 00:17:14.861 "data_offset": 256, 00:17:14.861 "data_size": 7936 00:17:14.861 }, 00:17:14.861 { 00:17:14.861 "name": "BaseBdev2", 00:17:14.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.861 "is_configured": false, 00:17:14.861 "data_offset": 0, 00:17:14.861 "data_size": 0 00:17:14.861 } 00:17:14.861 ] 00:17:14.861 }' 00:17:14.861 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.861 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.120 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:15.120 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.120 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.120 [2024-11-19 12:09:18.488548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.120 [2024-11-19 12:09:18.488684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:15.120 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.120 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:15.120 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.120 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.379 [2024-11-19 12:09:18.500580] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:15.379 [2024-11-19 12:09:18.502574] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:15.379 [2024-11-19 12:09:18.502668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.379 "name": "Existed_Raid", 00:17:15.379 "uuid": "70d33575-d4bb-489d-b42d-e00c632971e6", 00:17:15.379 "strip_size_kb": 0, 00:17:15.379 "state": "configuring", 00:17:15.379 "raid_level": "raid1", 00:17:15.379 "superblock": true, 00:17:15.379 "num_base_bdevs": 2, 00:17:15.379 "num_base_bdevs_discovered": 1, 00:17:15.379 "num_base_bdevs_operational": 2, 00:17:15.379 "base_bdevs_list": [ 00:17:15.379 { 00:17:15.379 "name": "BaseBdev1", 00:17:15.379 "uuid": "bcc7aac7-080d-4c91-836f-71e2bfb0fbbc", 00:17:15.379 "is_configured": true, 00:17:15.379 "data_offset": 256, 00:17:15.379 "data_size": 7936 00:17:15.379 }, 00:17:15.379 { 00:17:15.379 "name": "BaseBdev2", 00:17:15.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.379 "is_configured": false, 00:17:15.379 "data_offset": 0, 00:17:15.379 "data_size": 0 00:17:15.379 } 00:17:15.379 ] 00:17:15.379 }' 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.379 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.638 12:09:18 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.638 [2024-11-19 12:09:18.977767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.638 [2024-11-19 12:09:18.978163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:15.638 [2024-11-19 12:09:18.978216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:15.638 [2024-11-19 12:09:18.978512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:15.638 BaseBdev2 00:17:15.638 [2024-11-19 12:09:18.978708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:15.638 [2024-11-19 12:09:18.978760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:15.638 [2024-11-19 12:09:18.978940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:15.638 12:09:18 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.638 12:09:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.638 [ 00:17:15.638 { 00:17:15.638 "name": "BaseBdev2", 00:17:15.638 "aliases": [ 00:17:15.638 "e7e93f5e-3993-4e0f-9865-b01da6af20d4" 00:17:15.638 ], 00:17:15.638 "product_name": "Malloc disk", 00:17:15.638 "block_size": 4096, 00:17:15.638 "num_blocks": 8192, 00:17:15.638 "uuid": "e7e93f5e-3993-4e0f-9865-b01da6af20d4", 00:17:15.638 "assigned_rate_limits": { 00:17:15.638 "rw_ios_per_sec": 0, 00:17:15.638 "rw_mbytes_per_sec": 0, 00:17:15.638 "r_mbytes_per_sec": 0, 00:17:15.638 "w_mbytes_per_sec": 0 00:17:15.638 }, 00:17:15.638 "claimed": true, 00:17:15.638 "claim_type": "exclusive_write", 00:17:15.638 "zoned": false, 00:17:15.638 "supported_io_types": { 00:17:15.638 "read": true, 00:17:15.638 "write": true, 00:17:15.638 "unmap": true, 00:17:15.638 "flush": true, 00:17:15.638 "reset": true, 00:17:15.638 "nvme_admin": false, 00:17:15.638 "nvme_io": false, 00:17:15.638 "nvme_io_md": false, 00:17:15.638 "write_zeroes": true, 00:17:15.638 "zcopy": true, 00:17:15.638 "get_zone_info": false, 00:17:15.638 "zone_management": false, 00:17:15.638 "zone_append": false, 00:17:15.638 "compare": false, 00:17:15.638 "compare_and_write": false, 00:17:15.638 "abort": true, 00:17:15.638 "seek_hole": false, 00:17:15.638 "seek_data": false, 00:17:15.638 "copy": true, 00:17:15.638 "nvme_iov_md": false 
00:17:15.638 }, 00:17:15.638 "memory_domains": [ 00:17:15.638 { 00:17:15.897 "dma_device_id": "system", 00:17:15.897 "dma_device_type": 1 00:17:15.897 }, 00:17:15.897 { 00:17:15.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.897 "dma_device_type": 2 00:17:15.897 } 00:17:15.897 ], 00:17:15.898 "driver_specific": {} 00:17:15.898 } 00:17:15.898 ] 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.898 "name": "Existed_Raid", 00:17:15.898 "uuid": "70d33575-d4bb-489d-b42d-e00c632971e6", 00:17:15.898 "strip_size_kb": 0, 00:17:15.898 "state": "online", 00:17:15.898 "raid_level": "raid1", 00:17:15.898 "superblock": true, 00:17:15.898 "num_base_bdevs": 2, 00:17:15.898 "num_base_bdevs_discovered": 2, 00:17:15.898 "num_base_bdevs_operational": 2, 00:17:15.898 "base_bdevs_list": [ 00:17:15.898 { 00:17:15.898 "name": "BaseBdev1", 00:17:15.898 "uuid": "bcc7aac7-080d-4c91-836f-71e2bfb0fbbc", 00:17:15.898 "is_configured": true, 00:17:15.898 "data_offset": 256, 00:17:15.898 "data_size": 7936 00:17:15.898 }, 00:17:15.898 { 00:17:15.898 "name": "BaseBdev2", 00:17:15.898 "uuid": "e7e93f5e-3993-4e0f-9865-b01da6af20d4", 00:17:15.898 "is_configured": true, 00:17:15.898 "data_offset": 256, 00:17:15.898 "data_size": 7936 00:17:15.898 } 00:17:15.898 ] 00:17:15.898 }' 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.898 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.156 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:16.156 12:09:19 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:16.156 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:16.156 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:16.156 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:16.156 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:16.156 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:16.156 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.156 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.156 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:16.156 [2024-11-19 12:09:19.457271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.156 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.156 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:16.156 "name": "Existed_Raid", 00:17:16.156 "aliases": [ 00:17:16.156 "70d33575-d4bb-489d-b42d-e00c632971e6" 00:17:16.156 ], 00:17:16.156 "product_name": "Raid Volume", 00:17:16.156 "block_size": 4096, 00:17:16.156 "num_blocks": 7936, 00:17:16.156 "uuid": "70d33575-d4bb-489d-b42d-e00c632971e6", 00:17:16.156 "assigned_rate_limits": { 00:17:16.156 "rw_ios_per_sec": 0, 00:17:16.156 "rw_mbytes_per_sec": 0, 00:17:16.156 "r_mbytes_per_sec": 0, 00:17:16.156 "w_mbytes_per_sec": 0 00:17:16.156 }, 00:17:16.156 "claimed": false, 00:17:16.156 "zoned": false, 00:17:16.156 "supported_io_types": { 00:17:16.156 "read": true, 
00:17:16.156 "write": true, 00:17:16.156 "unmap": false, 00:17:16.156 "flush": false, 00:17:16.156 "reset": true, 00:17:16.156 "nvme_admin": false, 00:17:16.156 "nvme_io": false, 00:17:16.156 "nvme_io_md": false, 00:17:16.156 "write_zeroes": true, 00:17:16.156 "zcopy": false, 00:17:16.156 "get_zone_info": false, 00:17:16.156 "zone_management": false, 00:17:16.156 "zone_append": false, 00:17:16.156 "compare": false, 00:17:16.156 "compare_and_write": false, 00:17:16.156 "abort": false, 00:17:16.156 "seek_hole": false, 00:17:16.156 "seek_data": false, 00:17:16.156 "copy": false, 00:17:16.156 "nvme_iov_md": false 00:17:16.156 }, 00:17:16.156 "memory_domains": [ 00:17:16.156 { 00:17:16.156 "dma_device_id": "system", 00:17:16.156 "dma_device_type": 1 00:17:16.156 }, 00:17:16.156 { 00:17:16.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.156 "dma_device_type": 2 00:17:16.156 }, 00:17:16.156 { 00:17:16.156 "dma_device_id": "system", 00:17:16.156 "dma_device_type": 1 00:17:16.156 }, 00:17:16.156 { 00:17:16.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.156 "dma_device_type": 2 00:17:16.156 } 00:17:16.156 ], 00:17:16.156 "driver_specific": { 00:17:16.156 "raid": { 00:17:16.156 "uuid": "70d33575-d4bb-489d-b42d-e00c632971e6", 00:17:16.156 "strip_size_kb": 0, 00:17:16.156 "state": "online", 00:17:16.156 "raid_level": "raid1", 00:17:16.157 "superblock": true, 00:17:16.157 "num_base_bdevs": 2, 00:17:16.157 "num_base_bdevs_discovered": 2, 00:17:16.157 "num_base_bdevs_operational": 2, 00:17:16.157 "base_bdevs_list": [ 00:17:16.157 { 00:17:16.157 "name": "BaseBdev1", 00:17:16.157 "uuid": "bcc7aac7-080d-4c91-836f-71e2bfb0fbbc", 00:17:16.157 "is_configured": true, 00:17:16.157 "data_offset": 256, 00:17:16.157 "data_size": 7936 00:17:16.157 }, 00:17:16.157 { 00:17:16.157 "name": "BaseBdev2", 00:17:16.157 "uuid": "e7e93f5e-3993-4e0f-9865-b01da6af20d4", 00:17:16.157 "is_configured": true, 00:17:16.157 "data_offset": 256, 00:17:16.157 "data_size": 7936 00:17:16.157 } 
00:17:16.157 ] 00:17:16.157 } 00:17:16.157 } 00:17:16.157 }' 00:17:16.157 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:16.157 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:16.157 BaseBdev2' 00:17:16.157 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.415 [2024-11-19 12:09:19.684639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:16.415 12:09:19 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.415 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.675 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.675 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.675 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.675 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.675 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.675 "name": "Existed_Raid", 00:17:16.675 "uuid": "70d33575-d4bb-489d-b42d-e00c632971e6", 00:17:16.675 "strip_size_kb": 0, 00:17:16.675 "state": "online", 00:17:16.675 "raid_level": "raid1", 00:17:16.675 "superblock": true, 00:17:16.675 
"num_base_bdevs": 2, 00:17:16.675 "num_base_bdevs_discovered": 1, 00:17:16.675 "num_base_bdevs_operational": 1, 00:17:16.675 "base_bdevs_list": [ 00:17:16.675 { 00:17:16.675 "name": null, 00:17:16.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.675 "is_configured": false, 00:17:16.675 "data_offset": 0, 00:17:16.675 "data_size": 7936 00:17:16.675 }, 00:17:16.675 { 00:17:16.675 "name": "BaseBdev2", 00:17:16.675 "uuid": "e7e93f5e-3993-4e0f-9865-b01da6af20d4", 00:17:16.675 "is_configured": true, 00:17:16.675 "data_offset": 256, 00:17:16.675 "data_size": 7936 00:17:16.675 } 00:17:16.675 ] 00:17:16.675 }' 00:17:16.675 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.675 12:09:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.934 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:16.934 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:16.934 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:16.934 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.934 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.934 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.934 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.934 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:16.934 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:16.934 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:17:16.934 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.934 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.934 [2024-11-19 12:09:20.255285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:16.934 [2024-11-19 12:09:20.255385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.193 [2024-11-19 12:09:20.346461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.193 [2024-11-19 12:09:20.346513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.193 [2024-11-19 12:09:20.346525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:17.193 12:09:20 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85852 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85852 ']' 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85852 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85852 00:17:17.193 killing process with pid 85852 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85852' 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85852 00:17:17.193 [2024-11-19 12:09:20.440703] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:17.193 12:09:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85852 00:17:17.193 [2024-11-19 12:09:20.456846] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.131 12:09:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:18.131 00:17:18.131 real 0m4.895s 00:17:18.131 user 0m7.072s 00:17:18.131 sys 0m0.837s 00:17:18.131 12:09:21 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.131 12:09:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.131 ************************************ 00:17:18.131 END TEST raid_state_function_test_sb_4k 00:17:18.131 ************************************ 00:17:18.389 12:09:21 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:18.389 12:09:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:18.389 12:09:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.389 12:09:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.389 ************************************ 00:17:18.389 START TEST raid_superblock_test_4k 00:17:18.389 ************************************ 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:18.389 
12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86097 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86097 00:17:18.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86097 ']' 00:17:18.389 12:09:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.390 12:09:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.390 12:09:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.390 12:09:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.390 12:09:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.390 [2024-11-19 12:09:21.669828] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:17:18.390 [2024-11-19 12:09:21.670050] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86097 ] 00:17:18.648 [2024-11-19 12:09:21.847444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.648 [2024-11-19 12:09:21.955944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.908 [2024-11-19 12:09:22.152925] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.908 [2024-11-19 12:09:22.153071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.168 malloc1 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.168 [2024-11-19 12:09:22.521000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:19.168 [2024-11-19 12:09:22.521125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.168 [2024-11-19 12:09:22.521166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:19.168 [2024-11-19 12:09:22.521194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.168 [2024-11-19 12:09:22.523259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.168 [2024-11-19 12:09:22.523327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:19.168 pt1 00:17:19.168 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.169 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:19.169 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:19.169 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:19.169 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:19.169 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:19.169 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:19.169 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:19.169 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:19.169 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:19.169 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.169 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.428 malloc2 00:17:19.428 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.428 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:19.428 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.428 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.428 [2024-11-19 12:09:22.578183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:19.428 [2024-11-19 12:09:22.578272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.428 [2024-11-19 12:09:22.578325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:19.428 [2024-11-19 12:09:22.578361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.428 [2024-11-19 12:09:22.580467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.428 [2024-11-19 
12:09:22.580543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:19.428 pt2 00:17:19.428 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.428 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:19.428 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:19.428 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:19.428 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.428 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.428 [2024-11-19 12:09:22.590219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:19.428 [2024-11-19 12:09:22.591979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.428 [2024-11-19 12:09:22.592210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:19.428 [2024-11-19 12:09:22.592262] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:19.428 [2024-11-19 12:09:22.592503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:19.428 [2024-11-19 12:09:22.592685] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:19.429 [2024-11-19 12:09:22.592730] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:19.429 [2024-11-19 12:09:22.592917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.429 "name": "raid_bdev1", 00:17:19.429 "uuid": "d3eccdca-3a7b-4696-a3ba-5a32d13a6489", 00:17:19.429 "strip_size_kb": 0, 00:17:19.429 "state": "online", 00:17:19.429 "raid_level": "raid1", 00:17:19.429 "superblock": true, 00:17:19.429 "num_base_bdevs": 2, 00:17:19.429 
"num_base_bdevs_discovered": 2, 00:17:19.429 "num_base_bdevs_operational": 2, 00:17:19.429 "base_bdevs_list": [ 00:17:19.429 { 00:17:19.429 "name": "pt1", 00:17:19.429 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.429 "is_configured": true, 00:17:19.429 "data_offset": 256, 00:17:19.429 "data_size": 7936 00:17:19.429 }, 00:17:19.429 { 00:17:19.429 "name": "pt2", 00:17:19.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.429 "is_configured": true, 00:17:19.429 "data_offset": 256, 00:17:19.429 "data_size": 7936 00:17:19.429 } 00:17:19.429 ] 00:17:19.429 }' 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.429 12:09:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.688 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:19.688 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:19.688 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:19.688 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:19.688 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:19.688 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:19.688 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:19.688 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.688 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.688 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.688 [2024-11-19 12:09:23.053674] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:19.948 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:19.949 "name": "raid_bdev1", 00:17:19.949 "aliases": [ 00:17:19.949 "d3eccdca-3a7b-4696-a3ba-5a32d13a6489" 00:17:19.949 ], 00:17:19.949 "product_name": "Raid Volume", 00:17:19.949 "block_size": 4096, 00:17:19.949 "num_blocks": 7936, 00:17:19.949 "uuid": "d3eccdca-3a7b-4696-a3ba-5a32d13a6489", 00:17:19.949 "assigned_rate_limits": { 00:17:19.949 "rw_ios_per_sec": 0, 00:17:19.949 "rw_mbytes_per_sec": 0, 00:17:19.949 "r_mbytes_per_sec": 0, 00:17:19.949 "w_mbytes_per_sec": 0 00:17:19.949 }, 00:17:19.949 "claimed": false, 00:17:19.949 "zoned": false, 00:17:19.949 "supported_io_types": { 00:17:19.949 "read": true, 00:17:19.949 "write": true, 00:17:19.949 "unmap": false, 00:17:19.949 "flush": false, 00:17:19.949 "reset": true, 00:17:19.949 "nvme_admin": false, 00:17:19.949 "nvme_io": false, 00:17:19.949 "nvme_io_md": false, 00:17:19.949 "write_zeroes": true, 00:17:19.949 "zcopy": false, 00:17:19.949 "get_zone_info": false, 00:17:19.949 "zone_management": false, 00:17:19.949 "zone_append": false, 00:17:19.949 "compare": false, 00:17:19.949 "compare_and_write": false, 00:17:19.949 "abort": false, 00:17:19.949 "seek_hole": false, 00:17:19.949 "seek_data": false, 00:17:19.949 "copy": false, 00:17:19.949 "nvme_iov_md": false 00:17:19.949 }, 00:17:19.949 "memory_domains": [ 00:17:19.949 { 00:17:19.949 "dma_device_id": "system", 00:17:19.949 "dma_device_type": 1 00:17:19.949 }, 00:17:19.949 { 00:17:19.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.949 "dma_device_type": 2 00:17:19.949 }, 00:17:19.949 { 00:17:19.949 "dma_device_id": "system", 00:17:19.949 "dma_device_type": 1 00:17:19.949 }, 00:17:19.949 { 00:17:19.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.949 "dma_device_type": 2 00:17:19.949 } 00:17:19.949 ], 
00:17:19.949 "driver_specific": { 00:17:19.949 "raid": { 00:17:19.949 "uuid": "d3eccdca-3a7b-4696-a3ba-5a32d13a6489", 00:17:19.949 "strip_size_kb": 0, 00:17:19.949 "state": "online", 00:17:19.949 "raid_level": "raid1", 00:17:19.949 "superblock": true, 00:17:19.949 "num_base_bdevs": 2, 00:17:19.949 "num_base_bdevs_discovered": 2, 00:17:19.949 "num_base_bdevs_operational": 2, 00:17:19.949 "base_bdevs_list": [ 00:17:19.949 { 00:17:19.949 "name": "pt1", 00:17:19.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.949 "is_configured": true, 00:17:19.949 "data_offset": 256, 00:17:19.949 "data_size": 7936 00:17:19.949 }, 00:17:19.949 { 00:17:19.949 "name": "pt2", 00:17:19.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.949 "is_configured": true, 00:17:19.949 "data_offset": 256, 00:17:19.949 "data_size": 7936 00:17:19.949 } 00:17:19.949 ] 00:17:19.949 } 00:17:19.949 } 00:17:19.949 }' 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:19.949 pt2' 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.949 12:09:23 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:19.949 [2024-11-19 12:09:23.273258] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d3eccdca-3a7b-4696-a3ba-5a32d13a6489 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z d3eccdca-3a7b-4696-a3ba-5a32d13a6489 ']' 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.949 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.210 [2024-11-19 12:09:23.320913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.210 [2024-11-19 12:09:23.320935] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:20.210 [2024-11-19 12:09:23.321010] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.210 [2024-11-19 12:09:23.321076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.210 [2024-11-19 12:09:23.321090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:20.210 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.211 [2024-11-19 12:09:23.452712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:20.211 [2024-11-19 12:09:23.454493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:20.211 [2024-11-19 12:09:23.454557] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:20.211 [2024-11-19 12:09:23.454607] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:20.211 [2024-11-19 12:09:23.454620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.211 [2024-11-19 12:09:23.454629] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:20.211 request: 00:17:20.211 { 00:17:20.211 "name": "raid_bdev1", 00:17:20.211 "raid_level": "raid1", 00:17:20.211 "base_bdevs": [ 00:17:20.211 "malloc1", 00:17:20.211 "malloc2" 00:17:20.211 ], 00:17:20.211 "superblock": false, 00:17:20.211 "method": "bdev_raid_create", 00:17:20.211 "req_id": 1 00:17:20.211 } 00:17:20.211 Got JSON-RPC error response 00:17:20.211 response: 00:17:20.211 { 00:17:20.211 "code": -17, 00:17:20.211 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:20.211 } 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.211 [2024-11-19 12:09:23.516586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:20.211 [2024-11-19 12:09:23.516671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.211 [2024-11-19 12:09:23.516719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:20.211 [2024-11-19 12:09:23.516748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.211 [2024-11-19 12:09:23.518851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.211 [2024-11-19 12:09:23.518933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:20.211 [2024-11-19 12:09:23.519033] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:20.211 [2024-11-19 12:09:23.519121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:20.211 pt1 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.211 "name": "raid_bdev1", 00:17:20.211 "uuid": "d3eccdca-3a7b-4696-a3ba-5a32d13a6489", 00:17:20.211 "strip_size_kb": 0, 00:17:20.211 "state": "configuring", 00:17:20.211 "raid_level": "raid1", 00:17:20.211 "superblock": true, 00:17:20.211 "num_base_bdevs": 2, 00:17:20.211 "num_base_bdevs_discovered": 1, 00:17:20.211 "num_base_bdevs_operational": 2, 00:17:20.211 "base_bdevs_list": [ 00:17:20.211 { 00:17:20.211 "name": "pt1", 00:17:20.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.211 "is_configured": true, 00:17:20.211 "data_offset": 256, 00:17:20.211 "data_size": 7936 00:17:20.211 }, 00:17:20.211 { 00:17:20.211 "name": null, 00:17:20.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.211 "is_configured": false, 00:17:20.211 "data_offset": 256, 00:17:20.211 "data_size": 7936 00:17:20.211 } 
00:17:20.211 ] 00:17:20.211 }' 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.211 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.782 [2024-11-19 12:09:23.991806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.782 [2024-11-19 12:09:23.991934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.782 [2024-11-19 12:09:23.991975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:20.782 [2024-11-19 12:09:23.992018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.782 [2024-11-19 12:09:23.992486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.782 [2024-11-19 12:09:23.992554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.782 [2024-11-19 12:09:23.992667] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:20.782 [2024-11-19 12:09:23.992699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.782 [2024-11-19 12:09:23.992822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:20.782 [2024-11-19 12:09:23.992833] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:20.782 [2024-11-19 12:09:23.993080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:20.782 [2024-11-19 12:09:23.993234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:20.782 [2024-11-19 12:09:23.993245] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:20.782 [2024-11-19 12:09:23.993388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.782 pt2 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.782 12:09:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.782 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.782 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.782 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.782 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.782 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.782 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.782 "name": "raid_bdev1", 00:17:20.782 "uuid": "d3eccdca-3a7b-4696-a3ba-5a32d13a6489", 00:17:20.782 "strip_size_kb": 0, 00:17:20.782 "state": "online", 00:17:20.782 "raid_level": "raid1", 00:17:20.782 "superblock": true, 00:17:20.782 "num_base_bdevs": 2, 00:17:20.782 "num_base_bdevs_discovered": 2, 00:17:20.782 "num_base_bdevs_operational": 2, 00:17:20.782 "base_bdevs_list": [ 00:17:20.782 { 00:17:20.782 "name": "pt1", 00:17:20.782 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.782 "is_configured": true, 00:17:20.782 "data_offset": 256, 00:17:20.782 "data_size": 7936 00:17:20.782 }, 00:17:20.782 { 00:17:20.782 "name": "pt2", 00:17:20.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.782 "is_configured": true, 00:17:20.782 "data_offset": 256, 00:17:20.782 "data_size": 7936 00:17:20.782 } 00:17:20.782 ] 00:17:20.782 }' 00:17:20.782 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.782 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.041 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:21.301 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:21.301 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:21.301 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:21.301 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:21.301 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:21.301 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:21.301 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:21.301 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.301 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.301 [2024-11-19 12:09:24.431320] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.301 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.301 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:21.301 "name": "raid_bdev1", 00:17:21.301 "aliases": [ 00:17:21.301 "d3eccdca-3a7b-4696-a3ba-5a32d13a6489" 00:17:21.301 ], 00:17:21.301 "product_name": "Raid Volume", 00:17:21.301 "block_size": 4096, 00:17:21.301 "num_blocks": 7936, 00:17:21.301 "uuid": "d3eccdca-3a7b-4696-a3ba-5a32d13a6489", 00:17:21.301 "assigned_rate_limits": { 00:17:21.301 "rw_ios_per_sec": 0, 00:17:21.301 "rw_mbytes_per_sec": 0, 00:17:21.302 "r_mbytes_per_sec": 0, 00:17:21.302 "w_mbytes_per_sec": 0 00:17:21.302 }, 00:17:21.302 "claimed": false, 00:17:21.302 "zoned": false, 00:17:21.302 "supported_io_types": { 00:17:21.302 "read": true, 00:17:21.302 "write": true, 00:17:21.302 "unmap": false, 
00:17:21.302 "flush": false, 00:17:21.302 "reset": true, 00:17:21.302 "nvme_admin": false, 00:17:21.302 "nvme_io": false, 00:17:21.302 "nvme_io_md": false, 00:17:21.302 "write_zeroes": true, 00:17:21.302 "zcopy": false, 00:17:21.302 "get_zone_info": false, 00:17:21.302 "zone_management": false, 00:17:21.302 "zone_append": false, 00:17:21.302 "compare": false, 00:17:21.302 "compare_and_write": false, 00:17:21.302 "abort": false, 00:17:21.302 "seek_hole": false, 00:17:21.302 "seek_data": false, 00:17:21.302 "copy": false, 00:17:21.302 "nvme_iov_md": false 00:17:21.302 }, 00:17:21.302 "memory_domains": [ 00:17:21.302 { 00:17:21.302 "dma_device_id": "system", 00:17:21.302 "dma_device_type": 1 00:17:21.302 }, 00:17:21.302 { 00:17:21.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.302 "dma_device_type": 2 00:17:21.302 }, 00:17:21.302 { 00:17:21.302 "dma_device_id": "system", 00:17:21.302 "dma_device_type": 1 00:17:21.302 }, 00:17:21.302 { 00:17:21.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.302 "dma_device_type": 2 00:17:21.302 } 00:17:21.302 ], 00:17:21.302 "driver_specific": { 00:17:21.302 "raid": { 00:17:21.302 "uuid": "d3eccdca-3a7b-4696-a3ba-5a32d13a6489", 00:17:21.302 "strip_size_kb": 0, 00:17:21.302 "state": "online", 00:17:21.302 "raid_level": "raid1", 00:17:21.302 "superblock": true, 00:17:21.302 "num_base_bdevs": 2, 00:17:21.302 "num_base_bdevs_discovered": 2, 00:17:21.302 "num_base_bdevs_operational": 2, 00:17:21.302 "base_bdevs_list": [ 00:17:21.302 { 00:17:21.302 "name": "pt1", 00:17:21.302 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.302 "is_configured": true, 00:17:21.302 "data_offset": 256, 00:17:21.302 "data_size": 7936 00:17:21.302 }, 00:17:21.302 { 00:17:21.302 "name": "pt2", 00:17:21.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.302 "is_configured": true, 00:17:21.302 "data_offset": 256, 00:17:21.302 "data_size": 7936 00:17:21.302 } 00:17:21.302 ] 00:17:21.302 } 00:17:21.302 } 00:17:21.302 }' 00:17:21.302 
12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:21.302 pt2' 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.302 
12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.302 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.302 [2024-11-19 12:09:24.658830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' d3eccdca-3a7b-4696-a3ba-5a32d13a6489 '!=' d3eccdca-3a7b-4696-a3ba-5a32d13a6489 ']' 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.562 [2024-11-19 12:09:24.706575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:21.562 
12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.562 "name": "raid_bdev1", 00:17:21.562 "uuid": "d3eccdca-3a7b-4696-a3ba-5a32d13a6489", 
00:17:21.562 "strip_size_kb": 0, 00:17:21.562 "state": "online", 00:17:21.562 "raid_level": "raid1", 00:17:21.562 "superblock": true, 00:17:21.562 "num_base_bdevs": 2, 00:17:21.562 "num_base_bdevs_discovered": 1, 00:17:21.562 "num_base_bdevs_operational": 1, 00:17:21.562 "base_bdevs_list": [ 00:17:21.562 { 00:17:21.562 "name": null, 00:17:21.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.562 "is_configured": false, 00:17:21.562 "data_offset": 0, 00:17:21.562 "data_size": 7936 00:17:21.562 }, 00:17:21.562 { 00:17:21.562 "name": "pt2", 00:17:21.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.562 "is_configured": true, 00:17:21.562 "data_offset": 256, 00:17:21.562 "data_size": 7936 00:17:21.562 } 00:17:21.562 ] 00:17:21.562 }' 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.562 12:09:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.822 [2024-11-19 12:09:25.137835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:21.822 [2024-11-19 12:09:25.137907] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.822 [2024-11-19 12:09:25.138017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.822 [2024-11-19 12:09:25.138081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.822 [2024-11-19 12:09:25.138130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:21.822 12:09:25 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.822 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:22.082 12:09:25 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.082 [2024-11-19 12:09:25.213669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:22.082 [2024-11-19 12:09:25.213782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.082 [2024-11-19 12:09:25.213816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:22.082 [2024-11-19 12:09:25.213845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.082 [2024-11-19 12:09:25.215997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.082 [2024-11-19 12:09:25.216088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:22.082 [2024-11-19 12:09:25.216204] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:22.082 [2024-11-19 12:09:25.216272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:22.082 [2024-11-19 12:09:25.216445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:22.082 [2024-11-19 12:09:25.216486] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:22.082 [2024-11-19 12:09:25.216720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:22.082 [2024-11-19 12:09:25.216905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:22.082 [2024-11-19 12:09:25.216946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:17:22.082 [2024-11-19 12:09:25.217129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.082 pt2 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.082 "name": "raid_bdev1", 00:17:22.082 "uuid": "d3eccdca-3a7b-4696-a3ba-5a32d13a6489", 00:17:22.082 "strip_size_kb": 0, 00:17:22.082 "state": "online", 00:17:22.082 "raid_level": "raid1", 00:17:22.082 "superblock": true, 00:17:22.082 "num_base_bdevs": 2, 00:17:22.082 "num_base_bdevs_discovered": 1, 00:17:22.082 "num_base_bdevs_operational": 1, 00:17:22.082 "base_bdevs_list": [ 00:17:22.082 { 00:17:22.082 "name": null, 00:17:22.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.082 "is_configured": false, 00:17:22.082 "data_offset": 256, 00:17:22.082 "data_size": 7936 00:17:22.082 }, 00:17:22.082 { 00:17:22.082 "name": "pt2", 00:17:22.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.082 "is_configured": true, 00:17:22.082 "data_offset": 256, 00:17:22.082 "data_size": 7936 00:17:22.082 } 00:17:22.082 ] 00:17:22.082 }' 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.082 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.342 [2024-11-19 12:09:25.624962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.342 [2024-11-19 12:09:25.624987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.342 [2024-11-19 12:09:25.625122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.342 [2024-11-19 12:09:25.625166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.342 [2024-11-19 12:09:25.625175] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.342 [2024-11-19 12:09:25.684876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:22.342 [2024-11-19 12:09:25.684963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.342 [2024-11-19 12:09:25.685005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:22.342 [2024-11-19 12:09:25.685033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.342 [2024-11-19 12:09:25.687098] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.342 [2024-11-19 12:09:25.687172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:22.342 [2024-11-19 12:09:25.687283] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:22.342 [2024-11-19 12:09:25.687356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:22.342 [2024-11-19 12:09:25.687520] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:22.342 [2024-11-19 12:09:25.687572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.342 [2024-11-19 12:09:25.687603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:22.342 [2024-11-19 12:09:25.687699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:22.342 [2024-11-19 12:09:25.687806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:22.342 [2024-11-19 12:09:25.687844] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:22.342 [2024-11-19 12:09:25.688107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:22.342 [2024-11-19 12:09:25.688281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:22.342 [2024-11-19 12:09:25.688326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:22.342 [2024-11-19 12:09:25.688489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.342 pt1 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.342 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.343 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.343 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.343 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.343 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.343 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.602 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.602 "name": "raid_bdev1", 00:17:22.602 "uuid": "d3eccdca-3a7b-4696-a3ba-5a32d13a6489", 00:17:22.602 "strip_size_kb": 0, 00:17:22.602 "state": "online", 00:17:22.602 "raid_level": "raid1", 
00:17:22.602 "superblock": true, 00:17:22.602 "num_base_bdevs": 2, 00:17:22.602 "num_base_bdevs_discovered": 1, 00:17:22.602 "num_base_bdevs_operational": 1, 00:17:22.602 "base_bdevs_list": [ 00:17:22.602 { 00:17:22.602 "name": null, 00:17:22.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.602 "is_configured": false, 00:17:22.602 "data_offset": 256, 00:17:22.602 "data_size": 7936 00:17:22.602 }, 00:17:22.602 { 00:17:22.602 "name": "pt2", 00:17:22.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.602 "is_configured": true, 00:17:22.602 "data_offset": 256, 00:17:22.602 "data_size": 7936 00:17:22.602 } 00:17:22.602 ] 00:17:22.602 }' 00:17:22.602 12:09:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.602 12:09:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.862 12:09:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:22.863 
[2024-11-19 12:09:26.144332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' d3eccdca-3a7b-4696-a3ba-5a32d13a6489 '!=' d3eccdca-3a7b-4696-a3ba-5a32d13a6489 ']' 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86097 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86097 ']' 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86097 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86097 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86097' 00:17:22.863 killing process with pid 86097 00:17:22.863 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86097 00:17:22.863 [2024-11-19 12:09:26.229395] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.863 [2024-11-19 12:09:26.229523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.863 [2024-11-19 12:09:26.229594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.863 [2024-11-19 12:09:26.229641] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, sta 12:09:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86097 00:17:22.863 te offline 00:17:23.122 [2024-11-19 12:09:26.425043] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:24.503 12:09:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:24.503 00:17:24.503 real 0m5.975s 00:17:24.503 user 0m8.993s 00:17:24.503 sys 0m1.077s 00:17:24.503 12:09:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:24.503 12:09:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.503 ************************************ 00:17:24.503 END TEST raid_superblock_test_4k 00:17:24.503 ************************************ 00:17:24.503 12:09:27 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:24.503 12:09:27 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:24.503 12:09:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:24.503 12:09:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:24.503 12:09:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:24.503 ************************************ 00:17:24.503 START TEST raid_rebuild_test_sb_4k 00:17:24.503 ************************************ 00:17:24.503 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:24.503 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:24.503 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:24.503 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:24.503 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:24.503 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:24.503 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:24.503 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.503 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:24.504 12:09:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86422 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86422 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86422 ']' 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.504 12:09:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.504 [2024-11-19 12:09:27.727164] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:17:24.504 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:24.504 Zero copy mechanism will not be used. 
00:17:24.504 [2024-11-19 12:09:27.727413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86422 ] 00:17:24.763 [2024-11-19 12:09:27.901630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.763 [2024-11-19 12:09:28.029593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.022 [2024-11-19 12:09:28.255582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.022 [2024-11-19 12:09:28.255646] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.282 BaseBdev1_malloc 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.282 [2024-11-19 12:09:28.594903] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:25.282 [2024-11-19 12:09:28.595068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.282 [2024-11-19 12:09:28.595098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:25.282 [2024-11-19 12:09:28.595110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.282 [2024-11-19 12:09:28.597380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.282 [2024-11-19 12:09:28.597417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:25.282 BaseBdev1 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.282 BaseBdev2_malloc 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.282 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.282 [2024-11-19 12:09:28.654390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:25.282 [2024-11-19 12:09:28.654518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:25.282 [2024-11-19 12:09:28.654541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:25.282 [2024-11-19 12:09:28.654554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.542 [2024-11-19 12:09:28.656837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.542 [2024-11-19 12:09:28.656874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:25.542 BaseBdev2 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.542 spare_malloc 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.542 spare_delay 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.542 
[2024-11-19 12:09:28.757205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:25.542 [2024-11-19 12:09:28.757318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.542 [2024-11-19 12:09:28.757340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:25.542 [2024-11-19 12:09:28.757352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.542 [2024-11-19 12:09:28.759614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.542 [2024-11-19 12:09:28.759653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:25.542 spare 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.542 [2024-11-19 12:09:28.769251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.542 [2024-11-19 12:09:28.771202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.542 [2024-11-19 12:09:28.771382] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:25.542 [2024-11-19 12:09:28.771397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:25.542 [2024-11-19 12:09:28.771617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:25.542 [2024-11-19 12:09:28.771778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:25.542 [2024-11-19 
12:09:28.771786] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:25.542 [2024-11-19 12:09:28.771920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.542 "name": "raid_bdev1", 00:17:25.542 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:25.542 "strip_size_kb": 0, 00:17:25.542 "state": "online", 00:17:25.542 "raid_level": "raid1", 00:17:25.542 "superblock": true, 00:17:25.542 "num_base_bdevs": 2, 00:17:25.542 "num_base_bdevs_discovered": 2, 00:17:25.542 "num_base_bdevs_operational": 2, 00:17:25.542 "base_bdevs_list": [ 00:17:25.542 { 00:17:25.542 "name": "BaseBdev1", 00:17:25.542 "uuid": "2be75142-ba6f-52c6-8979-56c1bb0dd394", 00:17:25.542 "is_configured": true, 00:17:25.542 "data_offset": 256, 00:17:25.542 "data_size": 7936 00:17:25.542 }, 00:17:25.542 { 00:17:25.542 "name": "BaseBdev2", 00:17:25.542 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:25.542 "is_configured": true, 00:17:25.542 "data_offset": 256, 00:17:25.542 "data_size": 7936 00:17:25.542 } 00:17:25.542 ] 00:17:25.542 }' 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.542 12:09:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.111 [2024-11-19 12:09:29.228647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:26.111 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:26.111 
12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:26.111 [2024-11-19 12:09:29.472032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:26.372 /dev/nbd0 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:26.372 1+0 records in 00:17:26.372 1+0 records out 00:17:26.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535183 s, 7.7 MB/s 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:26.372 12:09:29 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:26.372 12:09:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:26.953 7936+0 records in 00:17:26.953 7936+0 records out 00:17:26.953 32505856 bytes (33 MB, 31 MiB) copied, 0.556328 s, 58.4 MB/s 00:17:26.953 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:26.953 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:26.953 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:26.953 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:26.953 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:26.953 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.953 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:26.953 [2024-11-19 12:09:30.293541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:26.953 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:26.953 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:26.953 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:26.953 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.953 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.953 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.228 [2024-11-19 12:09:30.328788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.228 12:09:30 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.228 "name": "raid_bdev1", 00:17:27.228 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:27.228 "strip_size_kb": 0, 00:17:27.228 "state": "online", 00:17:27.228 "raid_level": "raid1", 00:17:27.228 "superblock": true, 00:17:27.228 "num_base_bdevs": 2, 00:17:27.228 "num_base_bdevs_discovered": 1, 00:17:27.228 "num_base_bdevs_operational": 1, 00:17:27.228 "base_bdevs_list": [ 00:17:27.228 { 00:17:27.228 "name": null, 00:17:27.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.228 "is_configured": false, 00:17:27.228 "data_offset": 0, 00:17:27.228 "data_size": 7936 00:17:27.228 }, 00:17:27.228 { 00:17:27.228 "name": "BaseBdev2", 00:17:27.228 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:27.228 "is_configured": true, 00:17:27.228 "data_offset": 256, 00:17:27.228 
"data_size": 7936 00:17:27.228 } 00:17:27.228 ] 00:17:27.228 }' 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.228 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.487 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:27.487 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.487 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.487 [2024-11-19 12:09:30.847942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.745 [2024-11-19 12:09:30.864083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:27.745 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.745 12:09:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:27.745 [2024-11-19 12:09:30.865857] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:28.685 12:09:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.685 12:09:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.685 12:09:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.685 12:09:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.685 12:09:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.685 12:09:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.685 12:09:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:28.685 12:09:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.685 12:09:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.685 12:09:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.685 12:09:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.685 "name": "raid_bdev1", 00:17:28.685 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:28.685 "strip_size_kb": 0, 00:17:28.685 "state": "online", 00:17:28.685 "raid_level": "raid1", 00:17:28.685 "superblock": true, 00:17:28.685 "num_base_bdevs": 2, 00:17:28.685 "num_base_bdevs_discovered": 2, 00:17:28.685 "num_base_bdevs_operational": 2, 00:17:28.685 "process": { 00:17:28.685 "type": "rebuild", 00:17:28.685 "target": "spare", 00:17:28.685 "progress": { 00:17:28.685 "blocks": 2560, 00:17:28.685 "percent": 32 00:17:28.685 } 00:17:28.685 }, 00:17:28.685 "base_bdevs_list": [ 00:17:28.685 { 00:17:28.685 "name": "spare", 00:17:28.685 "uuid": "282dab56-86a7-5d62-a1c8-926df6c7f091", 00:17:28.685 "is_configured": true, 00:17:28.685 "data_offset": 256, 00:17:28.685 "data_size": 7936 00:17:28.685 }, 00:17:28.685 { 00:17:28.685 "name": "BaseBdev2", 00:17:28.685 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:28.685 "is_configured": true, 00:17:28.685 "data_offset": 256, 00:17:28.685 "data_size": 7936 00:17:28.685 } 00:17:28.685 ] 00:17:28.685 }' 00:17:28.685 12:09:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.685 12:09:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.685 12:09:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.685 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:17:28.685 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:28.685 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.685 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.685 [2024-11-19 12:09:32.024961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.945 [2024-11-19 12:09:32.071049] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:28.945 [2024-11-19 12:09:32.071108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.945 [2024-11-19 12:09:32.071138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.945 [2024-11-19 12:09:32.071159] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.945 "name": "raid_bdev1", 00:17:28.945 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:28.945 "strip_size_kb": 0, 00:17:28.945 "state": "online", 00:17:28.945 "raid_level": "raid1", 00:17:28.945 "superblock": true, 00:17:28.945 "num_base_bdevs": 2, 00:17:28.945 "num_base_bdevs_discovered": 1, 00:17:28.945 "num_base_bdevs_operational": 1, 00:17:28.945 "base_bdevs_list": [ 00:17:28.945 { 00:17:28.945 "name": null, 00:17:28.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.945 "is_configured": false, 00:17:28.945 "data_offset": 0, 00:17:28.945 "data_size": 7936 00:17:28.945 }, 00:17:28.945 { 00:17:28.945 "name": "BaseBdev2", 00:17:28.945 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:28.945 "is_configured": true, 00:17:28.945 "data_offset": 256, 00:17:28.945 "data_size": 7936 00:17:28.945 } 00:17:28.945 ] 00:17:28.945 }' 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.945 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.205 12:09:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:29.205 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.205 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:29.205 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:29.205 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.465 "name": "raid_bdev1", 00:17:29.465 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:29.465 "strip_size_kb": 0, 00:17:29.465 "state": "online", 00:17:29.465 "raid_level": "raid1", 00:17:29.465 "superblock": true, 00:17:29.465 "num_base_bdevs": 2, 00:17:29.465 "num_base_bdevs_discovered": 1, 00:17:29.465 "num_base_bdevs_operational": 1, 00:17:29.465 "base_bdevs_list": [ 00:17:29.465 { 00:17:29.465 "name": null, 00:17:29.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.465 "is_configured": false, 00:17:29.465 "data_offset": 0, 00:17:29.465 "data_size": 7936 00:17:29.465 }, 00:17:29.465 { 00:17:29.465 "name": "BaseBdev2", 00:17:29.465 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:29.465 "is_configured": true, 00:17:29.465 "data_offset": 
256, 00:17:29.465 "data_size": 7936 00:17:29.465 } 00:17:29.465 ] 00:17:29.465 }' 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.465 [2024-11-19 12:09:32.728148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:29.465 [2024-11-19 12:09:32.743282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.465 12:09:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:29.465 [2024-11-19 12:09:32.745145] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:30.403 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.403 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.403 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.403 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.403 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.403 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.403 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.403 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.403 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.403 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.664 "name": "raid_bdev1", 00:17:30.664 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:30.664 "strip_size_kb": 0, 00:17:30.664 "state": "online", 00:17:30.664 "raid_level": "raid1", 00:17:30.664 "superblock": true, 00:17:30.664 "num_base_bdevs": 2, 00:17:30.664 "num_base_bdevs_discovered": 2, 00:17:30.664 "num_base_bdevs_operational": 2, 00:17:30.664 "process": { 00:17:30.664 "type": "rebuild", 00:17:30.664 "target": "spare", 00:17:30.664 "progress": { 00:17:30.664 "blocks": 2560, 00:17:30.664 "percent": 32 00:17:30.664 } 00:17:30.664 }, 00:17:30.664 "base_bdevs_list": [ 00:17:30.664 { 00:17:30.664 "name": "spare", 00:17:30.664 "uuid": "282dab56-86a7-5d62-a1c8-926df6c7f091", 00:17:30.664 "is_configured": true, 00:17:30.664 "data_offset": 256, 00:17:30.664 "data_size": 7936 00:17:30.664 }, 00:17:30.664 { 00:17:30.664 "name": "BaseBdev2", 00:17:30.664 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:30.664 "is_configured": true, 00:17:30.664 "data_offset": 256, 00:17:30.664 "data_size": 7936 00:17:30.664 } 00:17:30.664 ] 00:17:30.664 }' 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:30.664 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=666 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.664 12:09:33 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.664 "name": "raid_bdev1", 00:17:30.664 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:30.664 "strip_size_kb": 0, 00:17:30.664 "state": "online", 00:17:30.664 "raid_level": "raid1", 00:17:30.664 "superblock": true, 00:17:30.664 "num_base_bdevs": 2, 00:17:30.664 "num_base_bdevs_discovered": 2, 00:17:30.664 "num_base_bdevs_operational": 2, 00:17:30.664 "process": { 00:17:30.664 "type": "rebuild", 00:17:30.664 "target": "spare", 00:17:30.664 "progress": { 00:17:30.664 "blocks": 2816, 00:17:30.664 "percent": 35 00:17:30.664 } 00:17:30.664 }, 00:17:30.664 "base_bdevs_list": [ 00:17:30.664 { 00:17:30.664 "name": "spare", 00:17:30.664 "uuid": "282dab56-86a7-5d62-a1c8-926df6c7f091", 00:17:30.664 "is_configured": true, 00:17:30.664 "data_offset": 256, 00:17:30.664 "data_size": 7936 00:17:30.664 }, 00:17:30.664 { 00:17:30.664 "name": "BaseBdev2", 00:17:30.664 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:30.664 "is_configured": true, 00:17:30.664 "data_offset": 256, 00:17:30.664 "data_size": 7936 00:17:30.664 } 00:17:30.664 ] 00:17:30.664 }' 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.664 12:09:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.924 12:09:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.924 12:09:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.862 "name": "raid_bdev1", 00:17:31.862 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:31.862 "strip_size_kb": 0, 00:17:31.862 "state": "online", 00:17:31.862 "raid_level": "raid1", 00:17:31.862 "superblock": true, 00:17:31.862 "num_base_bdevs": 2, 00:17:31.862 "num_base_bdevs_discovered": 2, 00:17:31.862 "num_base_bdevs_operational": 2, 00:17:31.862 "process": { 00:17:31.862 "type": "rebuild", 00:17:31.862 "target": "spare", 00:17:31.862 "progress": { 00:17:31.862 "blocks": 5888, 00:17:31.862 "percent": 74 00:17:31.862 } 00:17:31.862 }, 00:17:31.862 "base_bdevs_list": [ 00:17:31.862 { 
00:17:31.862 "name": "spare", 00:17:31.862 "uuid": "282dab56-86a7-5d62-a1c8-926df6c7f091", 00:17:31.862 "is_configured": true, 00:17:31.862 "data_offset": 256, 00:17:31.862 "data_size": 7936 00:17:31.862 }, 00:17:31.862 { 00:17:31.862 "name": "BaseBdev2", 00:17:31.862 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:31.862 "is_configured": true, 00:17:31.862 "data_offset": 256, 00:17:31.862 "data_size": 7936 00:17:31.862 } 00:17:31.862 ] 00:17:31.862 }' 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:31.862 12:09:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:32.801 [2024-11-19 12:09:35.857068] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:32.801 [2024-11-19 12:09:35.857142] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:32.801 [2024-11-19 12:09:35.857252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.060 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:33.060 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.060 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.060 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.060 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:33.060 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.060 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.060 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.060 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.060 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.060 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.060 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.060 "name": "raid_bdev1", 00:17:33.060 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:33.060 "strip_size_kb": 0, 00:17:33.060 "state": "online", 00:17:33.060 "raid_level": "raid1", 00:17:33.060 "superblock": true, 00:17:33.060 "num_base_bdevs": 2, 00:17:33.060 "num_base_bdevs_discovered": 2, 00:17:33.060 "num_base_bdevs_operational": 2, 00:17:33.060 "base_bdevs_list": [ 00:17:33.060 { 00:17:33.060 "name": "spare", 00:17:33.061 "uuid": "282dab56-86a7-5d62-a1c8-926df6c7f091", 00:17:33.061 "is_configured": true, 00:17:33.061 "data_offset": 256, 00:17:33.061 "data_size": 7936 00:17:33.061 }, 00:17:33.061 { 00:17:33.061 "name": "BaseBdev2", 00:17:33.061 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:33.061 "is_configured": true, 00:17:33.061 "data_offset": 256, 00:17:33.061 "data_size": 7936 00:17:33.061 } 00:17:33.061 ] 00:17:33.061 }' 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.061 "name": "raid_bdev1", 00:17:33.061 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:33.061 "strip_size_kb": 0, 00:17:33.061 "state": "online", 00:17:33.061 "raid_level": "raid1", 00:17:33.061 "superblock": true, 00:17:33.061 "num_base_bdevs": 2, 00:17:33.061 "num_base_bdevs_discovered": 2, 00:17:33.061 "num_base_bdevs_operational": 2, 00:17:33.061 "base_bdevs_list": [ 00:17:33.061 { 00:17:33.061 "name": "spare", 00:17:33.061 "uuid": "282dab56-86a7-5d62-a1c8-926df6c7f091", 00:17:33.061 "is_configured": true, 00:17:33.061 
"data_offset": 256, 00:17:33.061 "data_size": 7936 00:17:33.061 }, 00:17:33.061 { 00:17:33.061 "name": "BaseBdev2", 00:17:33.061 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:33.061 "is_configured": true, 00:17:33.061 "data_offset": 256, 00:17:33.061 "data_size": 7936 00:17:33.061 } 00:17:33.061 ] 00:17:33.061 }' 00:17:33.061 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.321 "name": "raid_bdev1", 00:17:33.321 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:33.321 "strip_size_kb": 0, 00:17:33.321 "state": "online", 00:17:33.321 "raid_level": "raid1", 00:17:33.321 "superblock": true, 00:17:33.321 "num_base_bdevs": 2, 00:17:33.321 "num_base_bdevs_discovered": 2, 00:17:33.321 "num_base_bdevs_operational": 2, 00:17:33.321 "base_bdevs_list": [ 00:17:33.321 { 00:17:33.321 "name": "spare", 00:17:33.321 "uuid": "282dab56-86a7-5d62-a1c8-926df6c7f091", 00:17:33.321 "is_configured": true, 00:17:33.321 "data_offset": 256, 00:17:33.321 "data_size": 7936 00:17:33.321 }, 00:17:33.321 { 00:17:33.321 "name": "BaseBdev2", 00:17:33.321 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:33.321 "is_configured": true, 00:17:33.321 "data_offset": 256, 00:17:33.321 "data_size": 7936 00:17:33.321 } 00:17:33.321 ] 00:17:33.321 }' 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.321 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.584 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:33.584 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.584 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.584 
[2024-11-19 12:09:36.944835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.584 [2024-11-19 12:09:36.944875] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.584 [2024-11-19 12:09:36.944964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.584 [2024-11-19 12:09:36.945044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.584 [2024-11-19 12:09:36.945059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:33.584 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.584 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.584 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:33.584 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.584 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.845 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.845 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:33.845 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:33.845 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:33.845 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:33.845 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:33.845 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:17:33.845 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:33.845 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:33.845 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:33.845 12:09:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:33.845 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:33.845 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:33.845 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:33.845 /dev/nbd0 00:17:33.845 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:34.105 1+0 records in 00:17:34.105 1+0 records out 00:17:34.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394945 s, 10.4 MB/s 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:34.105 /dev/nbd1 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:34.105 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:34.366 1+0 records in 00:17:34.366 1+0 records out 00:17:34.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471569 s, 8.7 MB/s 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:34.366 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:34.626 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:34.626 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:34.626 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:34.626 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.626 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.626 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:34.626 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:34.626 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.626 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:34.626 12:09:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:34.887 12:09:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.887 [2024-11-19 12:09:38.125150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:34.887 [2024-11-19 12:09:38.125207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.887 [2024-11-19 12:09:38.125232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:34.887 [2024-11-19 12:09:38.125241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.887 [2024-11-19 12:09:38.127374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.887 
[2024-11-19 12:09:38.127410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:34.887 [2024-11-19 12:09:38.127506] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:34.887 [2024-11-19 12:09:38.127559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.887 [2024-11-19 12:09:38.127722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.887 spare 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.887 [2024-11-19 12:09:38.227624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:34.887 [2024-11-19 12:09:38.227654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:34.887 [2024-11-19 12:09:38.227924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:34.887 [2024-11-19 12:09:38.228101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:34.887 [2024-11-19 12:09:38.228118] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:34.887 [2024-11-19 12:09:38.228282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:34.887 12:09:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.887 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.147 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.147 "name": "raid_bdev1", 00:17:35.147 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:35.147 "strip_size_kb": 0, 00:17:35.147 "state": "online", 00:17:35.147 "raid_level": "raid1", 00:17:35.147 "superblock": true, 00:17:35.147 "num_base_bdevs": 2, 00:17:35.147 "num_base_bdevs_discovered": 2, 00:17:35.147 "num_base_bdevs_operational": 2, 
00:17:35.147 "base_bdevs_list": [ 00:17:35.147 { 00:17:35.147 "name": "spare", 00:17:35.147 "uuid": "282dab56-86a7-5d62-a1c8-926df6c7f091", 00:17:35.147 "is_configured": true, 00:17:35.147 "data_offset": 256, 00:17:35.147 "data_size": 7936 00:17:35.147 }, 00:17:35.147 { 00:17:35.147 "name": "BaseBdev2", 00:17:35.147 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:35.147 "is_configured": true, 00:17:35.147 "data_offset": 256, 00:17:35.147 "data_size": 7936 00:17:35.147 } 00:17:35.147 ] 00:17:35.147 }' 00:17:35.147 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.147 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.408 "name": "raid_bdev1", 00:17:35.408 
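The `verify_raid_bdev_state` helper traced here selects one record out of `bdev_raid_get_bdevs all` with jq and compares individual fields against the expected values. A minimal standalone illustration, assuming only the JSON shape printed in the log (the inline `raid_bdev_info` literal below is a trimmed stand-in for the RPC output):

```shell
# Hedged re-creation of the field checks behind verify_raid_bdev_state,
# using a trimmed copy of the JSON shape bdev_raid_get_bdevs printed above.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}'
# Pull each field the same way the helper does, one jq query per value.
state=$(jq -r '.state' <<< "$raid_bdev_info")
level=$(jq -r '.raid_level' <<< "$raid_bdev_info")
operational=$(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info")
# The real helper fails the test run if any comparison does not hold.
[ "$state" = online ] && [ "$level" = raid1 ] && [ "$operational" -eq 2 ]
```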
"uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:35.408 "strip_size_kb": 0, 00:17:35.408 "state": "online", 00:17:35.408 "raid_level": "raid1", 00:17:35.408 "superblock": true, 00:17:35.408 "num_base_bdevs": 2, 00:17:35.408 "num_base_bdevs_discovered": 2, 00:17:35.408 "num_base_bdevs_operational": 2, 00:17:35.408 "base_bdevs_list": [ 00:17:35.408 { 00:17:35.408 "name": "spare", 00:17:35.408 "uuid": "282dab56-86a7-5d62-a1c8-926df6c7f091", 00:17:35.408 "is_configured": true, 00:17:35.408 "data_offset": 256, 00:17:35.408 "data_size": 7936 00:17:35.408 }, 00:17:35.408 { 00:17:35.408 "name": "BaseBdev2", 00:17:35.408 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:35.408 "is_configured": true, 00:17:35.408 "data_offset": 256, 00:17:35.408 "data_size": 7936 00:17:35.408 } 00:17:35.408 ] 00:17:35.408 }' 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.408 [2024-11-19 12:09:38.764108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.408 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.408 12:09:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.668 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.668 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.668 "name": "raid_bdev1", 00:17:35.668 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:35.668 "strip_size_kb": 0, 00:17:35.668 "state": "online", 00:17:35.668 "raid_level": "raid1", 00:17:35.668 "superblock": true, 00:17:35.668 "num_base_bdevs": 2, 00:17:35.668 "num_base_bdevs_discovered": 1, 00:17:35.668 "num_base_bdevs_operational": 1, 00:17:35.668 "base_bdevs_list": [ 00:17:35.668 { 00:17:35.668 "name": null, 00:17:35.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.668 "is_configured": false, 00:17:35.668 "data_offset": 0, 00:17:35.668 "data_size": 7936 00:17:35.668 }, 00:17:35.668 { 00:17:35.668 "name": "BaseBdev2", 00:17:35.668 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:35.668 "is_configured": true, 00:17:35.668 "data_offset": 256, 00:17:35.668 "data_size": 7936 00:17:35.668 } 00:17:35.668 ] 00:17:35.668 }' 00:17:35.668 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.668 12:09:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.948 12:09:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:35.948 12:09:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.948 12:09:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.948 [2024-11-19 12:09:39.271420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.948 [2024-11-19 12:09:39.271613] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:17:35.948 [2024-11-19 12:09:39.271641] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:35.948 [2024-11-19 12:09:39.271672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.948 [2024-11-19 12:09:39.287428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:35.948 12:09:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.948 12:09:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:35.948 [2024-11-19 12:09:39.289252] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:37.331 "name": "raid_bdev1", 00:17:37.331 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:37.331 "strip_size_kb": 0, 00:17:37.331 "state": "online", 00:17:37.331 "raid_level": "raid1", 00:17:37.331 "superblock": true, 00:17:37.331 "num_base_bdevs": 2, 00:17:37.331 "num_base_bdevs_discovered": 2, 00:17:37.331 "num_base_bdevs_operational": 2, 00:17:37.331 "process": { 00:17:37.331 "type": "rebuild", 00:17:37.331 "target": "spare", 00:17:37.331 "progress": { 00:17:37.331 "blocks": 2560, 00:17:37.331 "percent": 32 00:17:37.331 } 00:17:37.331 }, 00:17:37.331 "base_bdevs_list": [ 00:17:37.331 { 00:17:37.331 "name": "spare", 00:17:37.331 "uuid": "282dab56-86a7-5d62-a1c8-926df6c7f091", 00:17:37.331 "is_configured": true, 00:17:37.331 "data_offset": 256, 00:17:37.331 "data_size": 7936 00:17:37.331 }, 00:17:37.331 { 00:17:37.331 "name": "BaseBdev2", 00:17:37.331 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:37.331 "is_configured": true, 00:17:37.331 "data_offset": 256, 00:17:37.331 "data_size": 7936 00:17:37.331 } 00:17:37.331 ] 00:17:37.331 }' 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.331 [2024-11-19 12:09:40.448584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
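The `verify_raid_bdev_process` checks above rely on jq's `//` alternative operator: `.process.type // "none"` yields the live rebuild type while the `process` object exists and falls back to `"none"` once the rebuild completes and the object disappears. A small sketch of that dual-purpose check, with hand-written JSON standing in for the RPC output:

```shell
# Illustrative sketch of the verify_raid_bdev_process queries: the
# '// "none"' fallback lets one expression cover both an active rebuild
# and the no-process case. The JSON literals are stand-ins for RPC output.
raid_bdev_info='{"name": "raid_bdev1", "process": {"type": "rebuild", "target": "spare"}}'
process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")
# Once the rebuild finishes, the "process" object is gone and the same
# expression degrades cleanly to "none" instead of erroring or printing null.
no_process_type=$(jq -r '.process.type // "none"' <<< '{"name": "raid_bdev1"}')
```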
00:17:37.331 [2024-11-19 12:09:40.493986] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:37.331 [2024-11-19 12:09:40.494048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.331 [2024-11-19 12:09:40.494078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.331 [2024-11-19 12:09:40.494087] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.331 "name": "raid_bdev1", 00:17:37.331 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:37.331 "strip_size_kb": 0, 00:17:37.331 "state": "online", 00:17:37.331 "raid_level": "raid1", 00:17:37.331 "superblock": true, 00:17:37.331 "num_base_bdevs": 2, 00:17:37.331 "num_base_bdevs_discovered": 1, 00:17:37.331 "num_base_bdevs_operational": 1, 00:17:37.331 "base_bdevs_list": [ 00:17:37.331 { 00:17:37.331 "name": null, 00:17:37.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.331 "is_configured": false, 00:17:37.331 "data_offset": 0, 00:17:37.331 "data_size": 7936 00:17:37.331 }, 00:17:37.331 { 00:17:37.331 "name": "BaseBdev2", 00:17:37.331 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:37.331 "is_configured": true, 00:17:37.331 "data_offset": 256, 00:17:37.331 "data_size": 7936 00:17:37.331 } 00:17:37.331 ] 00:17:37.331 }' 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.331 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.592 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:37.592 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.592 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.592 [2024-11-19 12:09:40.942497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:37.592 [2024-11-19 
12:09:40.942577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.592 [2024-11-19 12:09:40.942598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:37.592 [2024-11-19 12:09:40.942609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.592 [2024-11-19 12:09:40.943079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.592 [2024-11-19 12:09:40.943108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:37.592 [2024-11-19 12:09:40.943208] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:37.592 [2024-11-19 12:09:40.943227] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:37.592 [2024-11-19 12:09:40.943237] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:37.592 [2024-11-19 12:09:40.943262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.592 [2024-11-19 12:09:40.958642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:37.592 spare 00:17:37.592 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.592 12:09:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:37.592 [2024-11-19 12:09:40.960435] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:38.973 12:09:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.973 12:09:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.973 12:09:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.973 12:09:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.973 12:09:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.973 12:09:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.973 12:09:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.973 12:09:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.973 12:09:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.973 12:09:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.973 "name": "raid_bdev1", 00:17:38.973 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:38.973 "strip_size_kb": 0, 00:17:38.973 
"state": "online", 00:17:38.973 "raid_level": "raid1", 00:17:38.973 "superblock": true, 00:17:38.973 "num_base_bdevs": 2, 00:17:38.973 "num_base_bdevs_discovered": 2, 00:17:38.973 "num_base_bdevs_operational": 2, 00:17:38.973 "process": { 00:17:38.973 "type": "rebuild", 00:17:38.973 "target": "spare", 00:17:38.973 "progress": { 00:17:38.973 "blocks": 2560, 00:17:38.973 "percent": 32 00:17:38.973 } 00:17:38.973 }, 00:17:38.973 "base_bdevs_list": [ 00:17:38.973 { 00:17:38.973 "name": "spare", 00:17:38.973 "uuid": "282dab56-86a7-5d62-a1c8-926df6c7f091", 00:17:38.973 "is_configured": true, 00:17:38.973 "data_offset": 256, 00:17:38.973 "data_size": 7936 00:17:38.973 }, 00:17:38.973 { 00:17:38.973 "name": "BaseBdev2", 00:17:38.973 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:38.973 "is_configured": true, 00:17:38.973 "data_offset": 256, 00:17:38.973 "data_size": 7936 00:17:38.973 } 00:17:38.973 ] 00:17:38.973 }' 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.973 [2024-11-19 12:09:42.108299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.973 [2024-11-19 12:09:42.165237] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:38.973 [2024-11-19 12:09:42.165288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.973 [2024-11-19 12:09:42.165319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.973 [2024-11-19 12:09:42.165326] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.973 "name": "raid_bdev1", 00:17:38.973 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:38.973 "strip_size_kb": 0, 00:17:38.973 "state": "online", 00:17:38.973 "raid_level": "raid1", 00:17:38.973 "superblock": true, 00:17:38.973 "num_base_bdevs": 2, 00:17:38.973 "num_base_bdevs_discovered": 1, 00:17:38.973 "num_base_bdevs_operational": 1, 00:17:38.973 "base_bdevs_list": [ 00:17:38.973 { 00:17:38.973 "name": null, 00:17:38.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.973 "is_configured": false, 00:17:38.973 "data_offset": 0, 00:17:38.973 "data_size": 7936 00:17:38.973 }, 00:17:38.973 { 00:17:38.973 "name": "BaseBdev2", 00:17:38.973 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:38.973 "is_configured": true, 00:17:38.973 "data_offset": 256, 00:17:38.973 "data_size": 7936 00:17:38.973 } 00:17:38.973 ] 00:17:38.973 }' 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.973 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.542 "name": "raid_bdev1", 00:17:39.542 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:39.542 "strip_size_kb": 0, 00:17:39.542 "state": "online", 00:17:39.542 "raid_level": "raid1", 00:17:39.542 "superblock": true, 00:17:39.542 "num_base_bdevs": 2, 00:17:39.542 "num_base_bdevs_discovered": 1, 00:17:39.542 "num_base_bdevs_operational": 1, 00:17:39.542 "base_bdevs_list": [ 00:17:39.542 { 00:17:39.542 "name": null, 00:17:39.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.542 "is_configured": false, 00:17:39.542 "data_offset": 0, 00:17:39.542 "data_size": 7936 00:17:39.542 }, 00:17:39.542 { 00:17:39.542 "name": "BaseBdev2", 00:17:39.542 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:39.542 "is_configured": true, 00:17:39.542 "data_offset": 256, 00:17:39.542 "data_size": 7936 00:17:39.542 } 00:17:39.542 ] 00:17:39.542 }' 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.542 [2024-11-19 12:09:42.817873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:39.542 [2024-11-19 12:09:42.817927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.542 [2024-11-19 12:09:42.817965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:39.542 [2024-11-19 12:09:42.817983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.542 [2024-11-19 12:09:42.818413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.542 [2024-11-19 12:09:42.818449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:39.542 [2024-11-19 12:09:42.818529] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:39.542 [2024-11-19 12:09:42.818547] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:39.542 [2024-11-19 12:09:42.818556] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:39.542 [2024-11-19 12:09:42.818566] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed 
to examine bdev BaseBdev1: Invalid argument 00:17:39.542 BaseBdev1 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.542 12:09:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.482 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.742 12:09:43 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.742 "name": "raid_bdev1", 00:17:40.742 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:40.742 "strip_size_kb": 0, 00:17:40.742 "state": "online", 00:17:40.742 "raid_level": "raid1", 00:17:40.742 "superblock": true, 00:17:40.742 "num_base_bdevs": 2, 00:17:40.742 "num_base_bdevs_discovered": 1, 00:17:40.742 "num_base_bdevs_operational": 1, 00:17:40.742 "base_bdevs_list": [ 00:17:40.742 { 00:17:40.742 "name": null, 00:17:40.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.742 "is_configured": false, 00:17:40.742 "data_offset": 0, 00:17:40.742 "data_size": 7936 00:17:40.742 }, 00:17:40.742 { 00:17:40.742 "name": "BaseBdev2", 00:17:40.742 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:40.742 "is_configured": true, 00:17:40.742 "data_offset": 256, 00:17:40.742 "data_size": 7936 00:17:40.742 } 00:17:40.742 ] 00:17:40.742 }' 00:17:40.742 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.742 12:09:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.003 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:41.003 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.003 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:41.003 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.003 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.003 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.003 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.003 12:09:44 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.003 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.003 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.003 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.003 "name": "raid_bdev1", 00:17:41.003 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:41.003 "strip_size_kb": 0, 00:17:41.003 "state": "online", 00:17:41.003 "raid_level": "raid1", 00:17:41.003 "superblock": true, 00:17:41.003 "num_base_bdevs": 2, 00:17:41.003 "num_base_bdevs_discovered": 1, 00:17:41.003 "num_base_bdevs_operational": 1, 00:17:41.003 "base_bdevs_list": [ 00:17:41.003 { 00:17:41.003 "name": null, 00:17:41.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.003 "is_configured": false, 00:17:41.003 "data_offset": 0, 00:17:41.003 "data_size": 7936 00:17:41.003 }, 00:17:41.003 { 00:17:41.003 "name": "BaseBdev2", 00:17:41.003 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:41.003 "is_configured": true, 00:17:41.003 "data_offset": 256, 00:17:41.003 "data_size": 7936 00:17:41.003 } 00:17:41.003 ] 00:17:41.003 }' 00:17:41.003 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:41.263 12:09:44 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.263 [2024-11-19 12:09:44.491254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:41.263 [2024-11-19 12:09:44.491432] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:41.263 [2024-11-19 12:09:44.491455] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:41.263 request: 00:17:41.263 { 00:17:41.263 "base_bdev": "BaseBdev1", 00:17:41.263 "raid_bdev": "raid_bdev1", 00:17:41.263 "method": "bdev_raid_add_base_bdev", 00:17:41.263 "req_id": 1 00:17:41.263 } 00:17:41.263 Got JSON-RPC error response 00:17:41.263 response: 00:17:41.263 { 00:17:41.263 "code": -22, 00:17:41.263 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:41.263 } 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@655 -- # es=1 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:41.263 12:09:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:42.201 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.202 12:09:45 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.202 "name": "raid_bdev1", 00:17:42.202 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:42.202 "strip_size_kb": 0, 00:17:42.202 "state": "online", 00:17:42.202 "raid_level": "raid1", 00:17:42.202 "superblock": true, 00:17:42.202 "num_base_bdevs": 2, 00:17:42.202 "num_base_bdevs_discovered": 1, 00:17:42.202 "num_base_bdevs_operational": 1, 00:17:42.202 "base_bdevs_list": [ 00:17:42.202 { 00:17:42.202 "name": null, 00:17:42.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.202 "is_configured": false, 00:17:42.202 "data_offset": 0, 00:17:42.202 "data_size": 7936 00:17:42.202 }, 00:17:42.202 { 00:17:42.202 "name": "BaseBdev2", 00:17:42.202 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:42.202 "is_configured": true, 00:17:42.202 "data_offset": 256, 00:17:42.202 "data_size": 7936 00:17:42.202 } 00:17:42.202 ] 00:17:42.202 }' 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.202 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.771 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.771 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.771 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.771 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.771 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.771 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.771 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.771 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.771 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.771 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.771 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.771 "name": "raid_bdev1", 00:17:42.771 "uuid": "aaa4a306-87f8-4597-bb98-c35371e2f6c5", 00:17:42.771 "strip_size_kb": 0, 00:17:42.771 "state": "online", 00:17:42.771 "raid_level": "raid1", 00:17:42.771 "superblock": true, 00:17:42.771 "num_base_bdevs": 2, 00:17:42.771 "num_base_bdevs_discovered": 1, 00:17:42.771 "num_base_bdevs_operational": 1, 00:17:42.771 "base_bdevs_list": [ 00:17:42.771 { 00:17:42.771 "name": null, 00:17:42.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.771 "is_configured": false, 00:17:42.771 "data_offset": 0, 00:17:42.771 "data_size": 7936 00:17:42.771 }, 00:17:42.771 { 00:17:42.771 "name": "BaseBdev2", 00:17:42.771 "uuid": "a3060e8a-0cdc-52d4-bfc2-19f6bead7233", 00:17:42.771 "is_configured": true, 00:17:42.771 "data_offset": 256, 00:17:42.771 "data_size": 7936 00:17:42.771 } 00:17:42.771 ] 00:17:42.771 }' 00:17:42.771 12:09:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.771 12:09:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.771 12:09:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.771 12:09:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.771 12:09:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@784 -- # killprocess 86422 00:17:42.771 12:09:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86422 ']' 00:17:42.771 12:09:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86422 00:17:42.771 12:09:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:42.771 12:09:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.771 12:09:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86422 00:17:42.771 killing process with pid 86422 00:17:42.771 Received shutdown signal, test time was about 60.000000 seconds 00:17:42.771 00:17:42.771 Latency(us) 00:17:42.771 [2024-11-19T12:09:46.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.771 [2024-11-19T12:09:46.148Z] =================================================================================================================== 00:17:42.771 [2024-11-19T12:09:46.148Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:42.771 12:09:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.771 12:09:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.771 12:09:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86422' 00:17:42.771 12:09:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86422 00:17:42.771 [2024-11-19 12:09:46.093767] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:42.771 [2024-11-19 12:09:46.093883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.771 [2024-11-19 12:09:46.093932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.771 
12:09:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86422 00:17:42.771 [2024-11-19 12:09:46.093942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:43.031 [2024-11-19 12:09:46.376768] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.411 ************************************ 00:17:44.411 END TEST raid_rebuild_test_sb_4k 00:17:44.411 12:09:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:44.411 00:17:44.411 real 0m19.781s 00:17:44.411 user 0m25.849s 00:17:44.411 sys 0m2.647s 00:17:44.411 12:09:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.411 12:09:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.411 ************************************ 00:17:44.411 12:09:47 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:44.411 12:09:47 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:44.411 12:09:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:44.411 12:09:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.411 12:09:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:44.411 ************************************ 00:17:44.411 START TEST raid_state_function_test_sb_md_separate 00:17:44.411 ************************************ 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:44.411 12:09:47 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:44.411 12:09:47 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87114 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87114' 00:17:44.411 Process raid pid: 87114 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87114 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87114 ']' 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.411 12:09:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.411 [2024-11-19 12:09:47.589098] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:17:44.411 [2024-11-19 12:09:47.589220] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.411 [2024-11-19 12:09:47.776818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.671 [2024-11-19 12:09:47.884688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.931 [2024-11-19 12:09:48.069746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.931 [2024-11-19 12:09:48.069785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.191 [2024-11-19 12:09:48.397846] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.191 [2024-11-19 12:09:48.397899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:45.191 [2024-11-19 12:09:48.397909] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.191 [2024-11-19 12:09:48.397935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.191 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.191 "name": "Existed_Raid", 00:17:45.191 "uuid": "6c1202c3-4238-40db-8c0a-88d52e32339d", 00:17:45.191 "strip_size_kb": 0, 00:17:45.191 "state": "configuring", 00:17:45.191 "raid_level": "raid1", 00:17:45.191 "superblock": true, 00:17:45.191 "num_base_bdevs": 2, 00:17:45.191 "num_base_bdevs_discovered": 0, 00:17:45.191 "num_base_bdevs_operational": 2, 00:17:45.191 "base_bdevs_list": [ 00:17:45.191 { 00:17:45.191 "name": "BaseBdev1", 00:17:45.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.191 "is_configured": false, 00:17:45.192 "data_offset": 0, 00:17:45.192 "data_size": 0 00:17:45.192 }, 00:17:45.192 { 00:17:45.192 "name": "BaseBdev2", 00:17:45.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.192 "is_configured": false, 00:17:45.192 "data_offset": 0, 00:17:45.192 "data_size": 0 00:17:45.192 } 00:17:45.192 ] 00:17:45.192 }' 00:17:45.192 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.192 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.763 [2024-11-19 
12:09:48.841058] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.763 [2024-11-19 12:09:48.841096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.763 [2024-11-19 12:09:48.853033] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.763 [2024-11-19 12:09:48.853072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.763 [2024-11-19 12:09:48.853081] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.763 [2024-11-19 12:09:48.853093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.763 [2024-11-19 12:09:48.899917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.763 BaseBdev1 
00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.763 [ 00:17:45.763 { 00:17:45.763 "name": "BaseBdev1", 00:17:45.763 "aliases": [ 00:17:45.763 "48c8992d-4bc5-4f78-b439-d94fa40d571a" 00:17:45.763 ], 00:17:45.763 "product_name": "Malloc disk", 00:17:45.763 
"block_size": 4096, 00:17:45.763 "num_blocks": 8192, 00:17:45.763 "uuid": "48c8992d-4bc5-4f78-b439-d94fa40d571a", 00:17:45.763 "md_size": 32, 00:17:45.763 "md_interleave": false, 00:17:45.763 "dif_type": 0, 00:17:45.763 "assigned_rate_limits": { 00:17:45.763 "rw_ios_per_sec": 0, 00:17:45.763 "rw_mbytes_per_sec": 0, 00:17:45.763 "r_mbytes_per_sec": 0, 00:17:45.763 "w_mbytes_per_sec": 0 00:17:45.763 }, 00:17:45.763 "claimed": true, 00:17:45.763 "claim_type": "exclusive_write", 00:17:45.763 "zoned": false, 00:17:45.763 "supported_io_types": { 00:17:45.763 "read": true, 00:17:45.763 "write": true, 00:17:45.763 "unmap": true, 00:17:45.763 "flush": true, 00:17:45.763 "reset": true, 00:17:45.763 "nvme_admin": false, 00:17:45.763 "nvme_io": false, 00:17:45.763 "nvme_io_md": false, 00:17:45.763 "write_zeroes": true, 00:17:45.763 "zcopy": true, 00:17:45.763 "get_zone_info": false, 00:17:45.763 "zone_management": false, 00:17:45.763 "zone_append": false, 00:17:45.763 "compare": false, 00:17:45.763 "compare_and_write": false, 00:17:45.763 "abort": true, 00:17:45.763 "seek_hole": false, 00:17:45.763 "seek_data": false, 00:17:45.763 "copy": true, 00:17:45.763 "nvme_iov_md": false 00:17:45.763 }, 00:17:45.763 "memory_domains": [ 00:17:45.763 { 00:17:45.763 "dma_device_id": "system", 00:17:45.763 "dma_device_type": 1 00:17:45.763 }, 00:17:45.763 { 00:17:45.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.763 "dma_device_type": 2 00:17:45.763 } 00:17:45.763 ], 00:17:45.763 "driver_specific": {} 00:17:45.763 } 00:17:45.763 ] 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:45.763 12:09:48 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.763 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.763 "name": "Existed_Raid", 00:17:45.763 "uuid": "aa188c5f-f120-4076-bd3b-e1ea977cc204", 
00:17:45.764 "strip_size_kb": 0, 00:17:45.764 "state": "configuring", 00:17:45.764 "raid_level": "raid1", 00:17:45.764 "superblock": true, 00:17:45.764 "num_base_bdevs": 2, 00:17:45.764 "num_base_bdevs_discovered": 1, 00:17:45.764 "num_base_bdevs_operational": 2, 00:17:45.764 "base_bdevs_list": [ 00:17:45.764 { 00:17:45.764 "name": "BaseBdev1", 00:17:45.764 "uuid": "48c8992d-4bc5-4f78-b439-d94fa40d571a", 00:17:45.764 "is_configured": true, 00:17:45.764 "data_offset": 256, 00:17:45.764 "data_size": 7936 00:17:45.764 }, 00:17:45.764 { 00:17:45.764 "name": "BaseBdev2", 00:17:45.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.764 "is_configured": false, 00:17:45.764 "data_offset": 0, 00:17:45.764 "data_size": 0 00:17:45.764 } 00:17:45.764 ] 00:17:45.764 }' 00:17:45.764 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.764 12:09:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.044 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:46.044 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.044 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.044 [2024-11-19 12:09:49.371252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:46.044 [2024-11-19 12:09:49.371297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:46.044 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.044 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:46.044 12:09:49 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.044 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.044 [2024-11-19 12:09:49.379300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.044 [2024-11-19 12:09:49.381066] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.044 [2024-11-19 12:09:49.381109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.044 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.044 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:46.044 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.045 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.311 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.311 "name": "Existed_Raid", 00:17:46.311 "uuid": "0ca3b4d4-d115-4a17-bac3-cf3b34f2337a", 00:17:46.311 "strip_size_kb": 0, 00:17:46.311 "state": "configuring", 00:17:46.311 "raid_level": "raid1", 00:17:46.311 "superblock": true, 00:17:46.311 "num_base_bdevs": 2, 00:17:46.311 "num_base_bdevs_discovered": 1, 00:17:46.311 "num_base_bdevs_operational": 2, 00:17:46.311 "base_bdevs_list": [ 00:17:46.311 { 00:17:46.311 "name": "BaseBdev1", 00:17:46.311 "uuid": "48c8992d-4bc5-4f78-b439-d94fa40d571a", 00:17:46.311 "is_configured": true, 00:17:46.311 "data_offset": 256, 00:17:46.311 "data_size": 7936 00:17:46.311 }, 00:17:46.311 { 00:17:46.311 "name": "BaseBdev2", 00:17:46.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.311 "is_configured": false, 00:17:46.311 "data_offset": 0, 00:17:46.311 "data_size": 0 00:17:46.311 } 00:17:46.311 ] 00:17:46.311 }' 00:17:46.311 12:09:49 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.311 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.572 [2024-11-19 12:09:49.865618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.572 [2024-11-19 12:09:49.865844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:46.572 [2024-11-19 12:09:49.865857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:46.572 [2024-11-19 12:09:49.865942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:46.572 [2024-11-19 12:09:49.866092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:46.572 [2024-11-19 12:09:49.866103] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:46.572 [2024-11-19 12:09:49.866189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.572 BaseBdev2 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.572 [ 00:17:46.572 { 00:17:46.572 "name": "BaseBdev2", 00:17:46.572 "aliases": [ 00:17:46.572 "a6ac3dbe-f834-4ca0-8bca-cb6ff02dc5a0" 00:17:46.572 ], 00:17:46.572 "product_name": "Malloc disk", 00:17:46.572 "block_size": 4096, 00:17:46.572 "num_blocks": 8192, 00:17:46.572 "uuid": "a6ac3dbe-f834-4ca0-8bca-cb6ff02dc5a0", 00:17:46.572 "md_size": 32, 00:17:46.572 "md_interleave": false, 00:17:46.572 "dif_type": 0, 00:17:46.572 "assigned_rate_limits": { 00:17:46.572 "rw_ios_per_sec": 0, 00:17:46.572 "rw_mbytes_per_sec": 0, 00:17:46.572 "r_mbytes_per_sec": 0, 00:17:46.572 "w_mbytes_per_sec": 0 00:17:46.572 }, 00:17:46.572 "claimed": true, 00:17:46.572 "claim_type": 
"exclusive_write", 00:17:46.572 "zoned": false, 00:17:46.572 "supported_io_types": { 00:17:46.572 "read": true, 00:17:46.572 "write": true, 00:17:46.572 "unmap": true, 00:17:46.572 "flush": true, 00:17:46.572 "reset": true, 00:17:46.572 "nvme_admin": false, 00:17:46.572 "nvme_io": false, 00:17:46.572 "nvme_io_md": false, 00:17:46.572 "write_zeroes": true, 00:17:46.572 "zcopy": true, 00:17:46.572 "get_zone_info": false, 00:17:46.572 "zone_management": false, 00:17:46.572 "zone_append": false, 00:17:46.572 "compare": false, 00:17:46.572 "compare_and_write": false, 00:17:46.572 "abort": true, 00:17:46.572 "seek_hole": false, 00:17:46.572 "seek_data": false, 00:17:46.572 "copy": true, 00:17:46.572 "nvme_iov_md": false 00:17:46.572 }, 00:17:46.572 "memory_domains": [ 00:17:46.572 { 00:17:46.572 "dma_device_id": "system", 00:17:46.572 "dma_device_type": 1 00:17:46.572 }, 00:17:46.572 { 00:17:46.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.572 "dma_device_type": 2 00:17:46.572 } 00:17:46.572 ], 00:17:46.572 "driver_specific": {} 00:17:46.572 } 00:17:46.572 ] 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.572 
12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.572 "name": "Existed_Raid", 00:17:46.572 "uuid": "0ca3b4d4-d115-4a17-bac3-cf3b34f2337a", 00:17:46.572 "strip_size_kb": 0, 00:17:46.572 "state": "online", 00:17:46.572 "raid_level": "raid1", 00:17:46.572 "superblock": true, 00:17:46.572 "num_base_bdevs": 2, 00:17:46.572 "num_base_bdevs_discovered": 2, 00:17:46.572 "num_base_bdevs_operational": 2, 00:17:46.572 
"base_bdevs_list": [ 00:17:46.572 { 00:17:46.572 "name": "BaseBdev1", 00:17:46.572 "uuid": "48c8992d-4bc5-4f78-b439-d94fa40d571a", 00:17:46.572 "is_configured": true, 00:17:46.572 "data_offset": 256, 00:17:46.572 "data_size": 7936 00:17:46.572 }, 00:17:46.572 { 00:17:46.572 "name": "BaseBdev2", 00:17:46.572 "uuid": "a6ac3dbe-f834-4ca0-8bca-cb6ff02dc5a0", 00:17:46.572 "is_configured": true, 00:17:46.572 "data_offset": 256, 00:17:46.572 "data_size": 7936 00:17:46.572 } 00:17:46.572 ] 00:17:46.572 }' 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.572 12:09:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.142 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:47.142 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:47.142 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:47.142 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:47.142 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:47.142 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:47.142 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:47.142 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.142 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.142 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:17:47.142 [2024-11-19 12:09:50.309179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.142 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.142 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:47.142 "name": "Existed_Raid", 00:17:47.142 "aliases": [ 00:17:47.142 "0ca3b4d4-d115-4a17-bac3-cf3b34f2337a" 00:17:47.142 ], 00:17:47.142 "product_name": "Raid Volume", 00:17:47.142 "block_size": 4096, 00:17:47.142 "num_blocks": 7936, 00:17:47.142 "uuid": "0ca3b4d4-d115-4a17-bac3-cf3b34f2337a", 00:17:47.142 "md_size": 32, 00:17:47.142 "md_interleave": false, 00:17:47.142 "dif_type": 0, 00:17:47.142 "assigned_rate_limits": { 00:17:47.142 "rw_ios_per_sec": 0, 00:17:47.142 "rw_mbytes_per_sec": 0, 00:17:47.142 "r_mbytes_per_sec": 0, 00:17:47.142 "w_mbytes_per_sec": 0 00:17:47.142 }, 00:17:47.142 "claimed": false, 00:17:47.142 "zoned": false, 00:17:47.142 "supported_io_types": { 00:17:47.142 "read": true, 00:17:47.142 "write": true, 00:17:47.142 "unmap": false, 00:17:47.142 "flush": false, 00:17:47.142 "reset": true, 00:17:47.142 "nvme_admin": false, 00:17:47.142 "nvme_io": false, 00:17:47.142 "nvme_io_md": false, 00:17:47.142 "write_zeroes": true, 00:17:47.142 "zcopy": false, 00:17:47.142 "get_zone_info": false, 00:17:47.142 "zone_management": false, 00:17:47.142 "zone_append": false, 00:17:47.142 "compare": false, 00:17:47.142 "compare_and_write": false, 00:17:47.142 "abort": false, 00:17:47.142 "seek_hole": false, 00:17:47.142 "seek_data": false, 00:17:47.142 "copy": false, 00:17:47.142 "nvme_iov_md": false 00:17:47.142 }, 00:17:47.142 "memory_domains": [ 00:17:47.142 { 00:17:47.142 "dma_device_id": "system", 00:17:47.142 "dma_device_type": 1 00:17:47.142 }, 00:17:47.142 { 00:17:47.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.142 "dma_device_type": 2 00:17:47.142 }, 00:17:47.142 { 
00:17:47.142 "dma_device_id": "system", 00:17:47.142 "dma_device_type": 1 00:17:47.142 }, 00:17:47.142 { 00:17:47.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.142 "dma_device_type": 2 00:17:47.142 } 00:17:47.142 ], 00:17:47.142 "driver_specific": { 00:17:47.142 "raid": { 00:17:47.142 "uuid": "0ca3b4d4-d115-4a17-bac3-cf3b34f2337a", 00:17:47.142 "strip_size_kb": 0, 00:17:47.142 "state": "online", 00:17:47.142 "raid_level": "raid1", 00:17:47.142 "superblock": true, 00:17:47.142 "num_base_bdevs": 2, 00:17:47.142 "num_base_bdevs_discovered": 2, 00:17:47.142 "num_base_bdevs_operational": 2, 00:17:47.142 "base_bdevs_list": [ 00:17:47.142 { 00:17:47.142 "name": "BaseBdev1", 00:17:47.142 "uuid": "48c8992d-4bc5-4f78-b439-d94fa40d571a", 00:17:47.142 "is_configured": true, 00:17:47.142 "data_offset": 256, 00:17:47.142 "data_size": 7936 00:17:47.142 }, 00:17:47.142 { 00:17:47.142 "name": "BaseBdev2", 00:17:47.142 "uuid": "a6ac3dbe-f834-4ca0-8bca-cb6ff02dc5a0", 00:17:47.142 "is_configured": true, 00:17:47.142 "data_offset": 256, 00:17:47.142 "data_size": 7936 00:17:47.142 } 00:17:47.142 ] 00:17:47.142 } 00:17:47.142 } 00:17:47.142 }' 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:47.143 BaseBdev2' 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.143 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 [2024-11-19 12:09:50.548516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.403 "name": "Existed_Raid", 00:17:47.403 "uuid": "0ca3b4d4-d115-4a17-bac3-cf3b34f2337a", 00:17:47.403 "strip_size_kb": 0, 00:17:47.403 "state": "online", 00:17:47.403 "raid_level": "raid1", 00:17:47.403 "superblock": true, 00:17:47.403 "num_base_bdevs": 2, 00:17:47.403 "num_base_bdevs_discovered": 1, 00:17:47.403 "num_base_bdevs_operational": 1, 00:17:47.403 "base_bdevs_list": [ 00:17:47.403 { 00:17:47.403 "name": null, 00:17:47.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.403 "is_configured": false, 00:17:47.403 "data_offset": 0, 00:17:47.403 "data_size": 7936 00:17:47.403 }, 00:17:47.403 { 00:17:47.403 "name": "BaseBdev2", 00:17:47.403 "uuid": 
"a6ac3dbe-f834-4ca0-8bca-cb6ff02dc5a0", 00:17:47.403 "is_configured": true, 00:17:47.403 "data_offset": 256, 00:17:47.403 "data_size": 7936 00:17:47.403 } 00:17:47.403 ] 00:17:47.403 }' 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.403 12:09:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.973 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:47.973 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.973 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.974 [2024-11-19 12:09:51.111287] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:47.974 [2024-11-19 12:09:51.111391] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.974 [2024-11-19 12:09:51.210353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.974 [2024-11-19 12:09:51.210399] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.974 [2024-11-19 12:09:51.210410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:47.974 12:09:51 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87114 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87114 ']' 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87114 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87114 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.974 killing process with pid 87114 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87114' 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87114 00:17:47.974 [2024-11-19 12:09:51.305081] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:47.974 12:09:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87114 00:17:47.974 [2024-11-19 12:09:51.321082] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:49.356 12:09:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:49.356 00:17:49.356 real 0m4.888s 00:17:49.356 user 0m7.027s 00:17:49.356 sys 0m0.879s 00:17:49.356 12:09:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.356 
12:09:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.356 ************************************ 00:17:49.356 END TEST raid_state_function_test_sb_md_separate 00:17:49.356 ************************************ 00:17:49.356 12:09:52 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:49.356 12:09:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:49.356 12:09:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.356 12:09:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:49.356 ************************************ 00:17:49.356 START TEST raid_superblock_test_md_separate 00:17:49.356 ************************************ 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87360 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87360 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87360 ']' 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.356 12:09:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.356 [2024-11-19 12:09:52.549619] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:17:49.356 [2024-11-19 12:09:52.549751] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87360 ] 00:17:49.356 [2024-11-19 12:09:52.728427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.616 [2024-11-19 12:09:52.837792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.876 [2024-11-19 12:09:53.024870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.876 [2024-11-19 12:09:53.024929] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:50.137 12:09:53 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.137 malloc1 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.137 [2024-11-19 12:09:53.400577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:50.137 [2024-11-19 12:09:53.400632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.137 [2024-11-19 12:09:53.400652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:50.137 [2024-11-19 12:09:53.400662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.137 [2024-11-19 12:09:53.402535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.137 [2024-11-19 12:09:53.402572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:50.137 pt1 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.137 malloc2 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:50.137 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.137 12:09:53 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.137 [2024-11-19 12:09:53.455078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:50.137 [2024-11-19 12:09:53.455151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.138 [2024-11-19 12:09:53.455171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:50.138 [2024-11-19 12:09:53.455195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.138 [2024-11-19 12:09:53.457052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.138 [2024-11-19 12:09:53.457083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:50.138 pt2 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.138 [2024-11-19 12:09:53.467112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:50.138 [2024-11-19 12:09:53.468838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.138 [2024-11-19 12:09:53.469027] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:50.138 [2024-11-19 12:09:53.469049] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:50.138 [2024-11-19 12:09:53.469141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:50.138 [2024-11-19 12:09:53.469259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:50.138 [2024-11-19 12:09:53.469278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:50.138 [2024-11-19 12:09:53.469388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.138 12:09:53 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.138 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.398 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.398 "name": "raid_bdev1", 00:17:50.398 "uuid": "71cad694-fa8e-4620-80e3-8204c245dea0", 00:17:50.398 "strip_size_kb": 0, 00:17:50.398 "state": "online", 00:17:50.398 "raid_level": "raid1", 00:17:50.398 "superblock": true, 00:17:50.398 "num_base_bdevs": 2, 00:17:50.398 "num_base_bdevs_discovered": 2, 00:17:50.398 "num_base_bdevs_operational": 2, 00:17:50.398 "base_bdevs_list": [ 00:17:50.398 { 00:17:50.398 "name": "pt1", 00:17:50.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.398 "is_configured": true, 00:17:50.398 "data_offset": 256, 00:17:50.398 "data_size": 7936 00:17:50.398 }, 00:17:50.398 { 00:17:50.398 "name": "pt2", 00:17:50.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.398 "is_configured": true, 00:17:50.398 "data_offset": 256, 00:17:50.398 "data_size": 7936 00:17:50.398 } 00:17:50.398 ] 00:17:50.398 }' 00:17:50.398 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.398 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.659 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:50.659 12:09:53 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:50.659 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:50.659 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:50.659 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:50.659 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:50.659 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.659 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:50.659 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.659 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.659 [2024-11-19 12:09:53.930499] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.659 12:09:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.659 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.659 "name": "raid_bdev1", 00:17:50.659 "aliases": [ 00:17:50.659 "71cad694-fa8e-4620-80e3-8204c245dea0" 00:17:50.659 ], 00:17:50.659 "product_name": "Raid Volume", 00:17:50.659 "block_size": 4096, 00:17:50.659 "num_blocks": 7936, 00:17:50.659 "uuid": "71cad694-fa8e-4620-80e3-8204c245dea0", 00:17:50.659 "md_size": 32, 00:17:50.659 "md_interleave": false, 00:17:50.659 "dif_type": 0, 00:17:50.659 "assigned_rate_limits": { 00:17:50.659 "rw_ios_per_sec": 0, 00:17:50.659 "rw_mbytes_per_sec": 0, 00:17:50.659 "r_mbytes_per_sec": 0, 00:17:50.659 "w_mbytes_per_sec": 0 00:17:50.659 }, 00:17:50.659 "claimed": false, 00:17:50.659 "zoned": false, 
00:17:50.659 "supported_io_types": { 00:17:50.659 "read": true, 00:17:50.659 "write": true, 00:17:50.659 "unmap": false, 00:17:50.659 "flush": false, 00:17:50.659 "reset": true, 00:17:50.659 "nvme_admin": false, 00:17:50.659 "nvme_io": false, 00:17:50.659 "nvme_io_md": false, 00:17:50.659 "write_zeroes": true, 00:17:50.659 "zcopy": false, 00:17:50.659 "get_zone_info": false, 00:17:50.659 "zone_management": false, 00:17:50.659 "zone_append": false, 00:17:50.659 "compare": false, 00:17:50.659 "compare_and_write": false, 00:17:50.659 "abort": false, 00:17:50.659 "seek_hole": false, 00:17:50.659 "seek_data": false, 00:17:50.659 "copy": false, 00:17:50.659 "nvme_iov_md": false 00:17:50.659 }, 00:17:50.659 "memory_domains": [ 00:17:50.659 { 00:17:50.659 "dma_device_id": "system", 00:17:50.659 "dma_device_type": 1 00:17:50.659 }, 00:17:50.659 { 00:17:50.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.659 "dma_device_type": 2 00:17:50.659 }, 00:17:50.659 { 00:17:50.659 "dma_device_id": "system", 00:17:50.659 "dma_device_type": 1 00:17:50.659 }, 00:17:50.659 { 00:17:50.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.659 "dma_device_type": 2 00:17:50.659 } 00:17:50.659 ], 00:17:50.659 "driver_specific": { 00:17:50.659 "raid": { 00:17:50.659 "uuid": "71cad694-fa8e-4620-80e3-8204c245dea0", 00:17:50.659 "strip_size_kb": 0, 00:17:50.659 "state": "online", 00:17:50.659 "raid_level": "raid1", 00:17:50.659 "superblock": true, 00:17:50.659 "num_base_bdevs": 2, 00:17:50.659 "num_base_bdevs_discovered": 2, 00:17:50.660 "num_base_bdevs_operational": 2, 00:17:50.660 "base_bdevs_list": [ 00:17:50.660 { 00:17:50.660 "name": "pt1", 00:17:50.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.660 "is_configured": true, 00:17:50.660 "data_offset": 256, 00:17:50.660 "data_size": 7936 00:17:50.660 }, 00:17:50.660 { 00:17:50.660 "name": "pt2", 00:17:50.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.660 "is_configured": true, 00:17:50.660 "data_offset": 256, 
00:17:50.660 "data_size": 7936 00:17:50.660 } 00:17:50.660 ] 00:17:50.660 } 00:17:50.660 } 00:17:50.660 }' 00:17:50.660 12:09:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:50.660 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:50.660 pt2' 00:17:50.660 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.920 [2024-11-19 12:09:54.166067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=71cad694-fa8e-4620-80e3-8204c245dea0 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 71cad694-fa8e-4620-80e3-8204c245dea0 ']' 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.920 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.920 [2024-11-19 12:09:54.193767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.920 [2024-11-19 12:09:54.193793] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.921 [2024-11-19 12:09:54.193865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.921 [2024-11-19 12:09:54.193915] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.921 [2024-11-19 12:09:54.193926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:50.921 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.181 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:51.181 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:51.181 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:51.181 12:09:54 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:51.181 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:51.181 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.181 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:51.181 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.182 [2024-11-19 12:09:54.313572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:51.182 [2024-11-19 12:09:54.315303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:51.182 [2024-11-19 12:09:54.315378] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:51.182 [2024-11-19 12:09:54.315424] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:51.182 [2024-11-19 12:09:54.315438] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.182 [2024-11-19 12:09:54.315448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:51.182 request: 00:17:51.182 { 00:17:51.182 "name": 
"raid_bdev1", 00:17:51.182 "raid_level": "raid1", 00:17:51.182 "base_bdevs": [ 00:17:51.182 "malloc1", 00:17:51.182 "malloc2" 00:17:51.182 ], 00:17:51.182 "superblock": false, 00:17:51.182 "method": "bdev_raid_create", 00:17:51.182 "req_id": 1 00:17:51.182 } 00:17:51.182 Got JSON-RPC error response 00:17:51.182 response: 00:17:51.182 { 00:17:51.182 "code": -17, 00:17:51.182 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:51.182 } 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.182 [2024-11-19 12:09:54.365465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:51.182 [2024-11-19 12:09:54.365510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.182 [2024-11-19 12:09:54.365522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:51.182 [2024-11-19 12:09:54.365532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.182 [2024-11-19 12:09:54.367341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.182 [2024-11-19 12:09:54.367377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:51.182 [2024-11-19 12:09:54.367414] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:51.182 [2024-11-19 12:09:54.367460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:51.182 pt1 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.182 "name": "raid_bdev1", 00:17:51.182 "uuid": "71cad694-fa8e-4620-80e3-8204c245dea0", 00:17:51.182 "strip_size_kb": 0, 00:17:51.182 "state": "configuring", 00:17:51.182 "raid_level": "raid1", 00:17:51.182 "superblock": true, 00:17:51.182 "num_base_bdevs": 2, 00:17:51.182 "num_base_bdevs_discovered": 1, 00:17:51.182 "num_base_bdevs_operational": 2, 00:17:51.182 "base_bdevs_list": [ 00:17:51.182 { 00:17:51.182 "name": "pt1", 00:17:51.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:51.182 "is_configured": true, 00:17:51.182 "data_offset": 256, 00:17:51.182 "data_size": 7936 00:17:51.182 }, 00:17:51.182 { 00:17:51.182 "name": null, 00:17:51.182 
"uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.182 "is_configured": false, 00:17:51.182 "data_offset": 256, 00:17:51.182 "data_size": 7936 00:17:51.182 } 00:17:51.182 ] 00:17:51.182 }' 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.182 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.753 [2024-11-19 12:09:54.840767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:51.753 [2024-11-19 12:09:54.840872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.753 [2024-11-19 12:09:54.840901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:51.753 [2024-11-19 12:09:54.840917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.753 [2024-11-19 12:09:54.841261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.753 [2024-11-19 12:09:54.841296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:51.753 [2024-11-19 12:09:54.841369] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:17:51.753 [2024-11-19 12:09:54.841408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:51.753 [2024-11-19 12:09:54.841565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:51.753 [2024-11-19 12:09:54.841589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:51.753 [2024-11-19 12:09:54.841680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:51.753 [2024-11-19 12:09:54.841843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:51.753 [2024-11-19 12:09:54.841861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:51.753 [2024-11-19 12:09:54.841981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.753 pt2 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.753 "name": "raid_bdev1", 00:17:51.753 "uuid": "71cad694-fa8e-4620-80e3-8204c245dea0", 00:17:51.753 "strip_size_kb": 0, 00:17:51.753 "state": "online", 00:17:51.753 "raid_level": "raid1", 00:17:51.753 "superblock": true, 00:17:51.753 "num_base_bdevs": 2, 00:17:51.753 "num_base_bdevs_discovered": 2, 00:17:51.753 "num_base_bdevs_operational": 2, 00:17:51.753 "base_bdevs_list": [ 00:17:51.753 { 00:17:51.753 "name": "pt1", 00:17:51.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:51.753 "is_configured": true, 00:17:51.753 "data_offset": 256, 00:17:51.753 "data_size": 7936 00:17:51.753 }, 00:17:51.753 { 00:17:51.753 "name": "pt2", 00:17:51.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.753 "is_configured": true, 00:17:51.753 "data_offset": 256, 
00:17:51.753 "data_size": 7936 00:17:51.753 } 00:17:51.753 ] 00:17:51.753 }' 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.753 12:09:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.013 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:52.013 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:52.013 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:52.013 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:52.013 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:52.013 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:52.013 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.013 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:52.013 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.013 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.013 [2024-11-19 12:09:55.344093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.013 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.013 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:52.013 "name": "raid_bdev1", 00:17:52.013 "aliases": [ 00:17:52.013 "71cad694-fa8e-4620-80e3-8204c245dea0" 00:17:52.013 ], 00:17:52.013 "product_name": 
"Raid Volume", 00:17:52.013 "block_size": 4096, 00:17:52.013 "num_blocks": 7936, 00:17:52.013 "uuid": "71cad694-fa8e-4620-80e3-8204c245dea0", 00:17:52.013 "md_size": 32, 00:17:52.013 "md_interleave": false, 00:17:52.013 "dif_type": 0, 00:17:52.013 "assigned_rate_limits": { 00:17:52.013 "rw_ios_per_sec": 0, 00:17:52.013 "rw_mbytes_per_sec": 0, 00:17:52.013 "r_mbytes_per_sec": 0, 00:17:52.013 "w_mbytes_per_sec": 0 00:17:52.013 }, 00:17:52.013 "claimed": false, 00:17:52.013 "zoned": false, 00:17:52.013 "supported_io_types": { 00:17:52.013 "read": true, 00:17:52.013 "write": true, 00:17:52.013 "unmap": false, 00:17:52.013 "flush": false, 00:17:52.013 "reset": true, 00:17:52.013 "nvme_admin": false, 00:17:52.013 "nvme_io": false, 00:17:52.013 "nvme_io_md": false, 00:17:52.013 "write_zeroes": true, 00:17:52.013 "zcopy": false, 00:17:52.013 "get_zone_info": false, 00:17:52.013 "zone_management": false, 00:17:52.013 "zone_append": false, 00:17:52.013 "compare": false, 00:17:52.013 "compare_and_write": false, 00:17:52.013 "abort": false, 00:17:52.013 "seek_hole": false, 00:17:52.013 "seek_data": false, 00:17:52.013 "copy": false, 00:17:52.013 "nvme_iov_md": false 00:17:52.013 }, 00:17:52.013 "memory_domains": [ 00:17:52.013 { 00:17:52.014 "dma_device_id": "system", 00:17:52.014 "dma_device_type": 1 00:17:52.014 }, 00:17:52.014 { 00:17:52.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.014 "dma_device_type": 2 00:17:52.014 }, 00:17:52.014 { 00:17:52.014 "dma_device_id": "system", 00:17:52.014 "dma_device_type": 1 00:17:52.014 }, 00:17:52.014 { 00:17:52.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.014 "dma_device_type": 2 00:17:52.014 } 00:17:52.014 ], 00:17:52.014 "driver_specific": { 00:17:52.014 "raid": { 00:17:52.014 "uuid": "71cad694-fa8e-4620-80e3-8204c245dea0", 00:17:52.014 "strip_size_kb": 0, 00:17:52.014 "state": "online", 00:17:52.014 "raid_level": "raid1", 00:17:52.014 "superblock": true, 00:17:52.014 "num_base_bdevs": 2, 00:17:52.014 
"num_base_bdevs_discovered": 2, 00:17:52.014 "num_base_bdevs_operational": 2, 00:17:52.014 "base_bdevs_list": [ 00:17:52.014 { 00:17:52.014 "name": "pt1", 00:17:52.014 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:52.014 "is_configured": true, 00:17:52.014 "data_offset": 256, 00:17:52.014 "data_size": 7936 00:17:52.014 }, 00:17:52.014 { 00:17:52.014 "name": "pt2", 00:17:52.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.014 "is_configured": true, 00:17:52.014 "data_offset": 256, 00:17:52.014 "data_size": 7936 00:17:52.014 } 00:17:52.014 ] 00:17:52.014 } 00:17:52.014 } 00:17:52.014 }' 00:17:52.014 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:52.273 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:52.273 pt2' 00:17:52.273 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.273 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:52.273 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.273 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.273 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:52.273 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.273 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.273 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.273 
12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:52.273 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.274 [2024-11-19 12:09:55.543729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 71cad694-fa8e-4620-80e3-8204c245dea0 '!=' 71cad694-fa8e-4620-80e3-8204c245dea0 ']' 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.274 [2024-11-19 12:09:55.571467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.274 12:09:55 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.274 "name": "raid_bdev1", 00:17:52.274 "uuid": "71cad694-fa8e-4620-80e3-8204c245dea0", 00:17:52.274 "strip_size_kb": 0, 00:17:52.274 "state": "online", 00:17:52.274 "raid_level": "raid1", 00:17:52.274 "superblock": true, 00:17:52.274 "num_base_bdevs": 2, 00:17:52.274 "num_base_bdevs_discovered": 1, 00:17:52.274 "num_base_bdevs_operational": 1, 00:17:52.274 "base_bdevs_list": [ 00:17:52.274 { 00:17:52.274 "name": null, 00:17:52.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.274 "is_configured": false, 00:17:52.274 "data_offset": 0, 00:17:52.274 "data_size": 7936 00:17:52.274 }, 00:17:52.274 { 00:17:52.274 "name": "pt2", 00:17:52.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.274 "is_configured": true, 00:17:52.274 "data_offset": 256, 00:17:52.274 "data_size": 7936 00:17:52.274 } 00:17:52.274 ] 00:17:52.274 }' 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:52.274 12:09:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.844 [2024-11-19 12:09:56.022911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.844 [2024-11-19 12:09:56.022943] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.844 [2024-11-19 12:09:56.023038] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.844 [2024-11-19 12:09:56.023089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.844 [2024-11-19 12:09:56.023103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:52.844 12:09:56 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.844 [2024-11-19 12:09:56.098790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:52.844 [2024-11-19 12:09:56.098858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.844 
[2024-11-19 12:09:56.098876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:52.844 [2024-11-19 12:09:56.098890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.844 [2024-11-19 12:09:56.101110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.844 [2024-11-19 12:09:56.101156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:52.844 [2024-11-19 12:09:56.101208] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:52.844 [2024-11-19 12:09:56.101261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.844 [2024-11-19 12:09:56.101361] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:52.844 [2024-11-19 12:09:56.101381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:52.844 [2024-11-19 12:09:56.101463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:52.844 [2024-11-19 12:09:56.101591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:52.844 [2024-11-19 12:09:56.101605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:52.844 [2024-11-19 12:09:56.101703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.844 pt2 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.844 "name": "raid_bdev1", 00:17:52.844 "uuid": "71cad694-fa8e-4620-80e3-8204c245dea0", 00:17:52.844 "strip_size_kb": 0, 00:17:52.844 "state": "online", 00:17:52.844 "raid_level": "raid1", 00:17:52.844 "superblock": true, 00:17:52.844 "num_base_bdevs": 2, 00:17:52.844 "num_base_bdevs_discovered": 1, 00:17:52.844 "num_base_bdevs_operational": 1, 00:17:52.844 "base_bdevs_list": [ 00:17:52.844 { 00:17:52.844 
"name": null, 00:17:52.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.844 "is_configured": false, 00:17:52.844 "data_offset": 256, 00:17:52.844 "data_size": 7936 00:17:52.844 }, 00:17:52.844 { 00:17:52.844 "name": "pt2", 00:17:52.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.844 "is_configured": true, 00:17:52.844 "data_offset": 256, 00:17:52.844 "data_size": 7936 00:17:52.844 } 00:17:52.844 ] 00:17:52.844 }' 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.844 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.416 [2024-11-19 12:09:56.557944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.416 [2024-11-19 12:09:56.557974] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.416 [2024-11-19 12:09:56.558067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.416 [2024-11-19 12:09:56.558113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.416 [2024-11-19 12:09:56.558123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.416 12:09:56 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.416 [2024-11-19 12:09:56.621878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:53.416 [2024-11-19 12:09:56.621927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.416 [2024-11-19 12:09:56.621946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:53.416 [2024-11-19 12:09:56.621956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.416 [2024-11-19 12:09:56.624070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.416 [2024-11-19 12:09:56.624106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:53.416 [2024-11-19 12:09:56.624158] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:17:53.416 [2024-11-19 12:09:56.624204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:53.416 [2024-11-19 12:09:56.624322] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:53.416 [2024-11-19 12:09:56.624338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.416 [2024-11-19 12:09:56.624357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:53.416 [2024-11-19 12:09:56.624425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.416 [2024-11-19 12:09:56.624505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:53.416 [2024-11-19 12:09:56.624514] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:53.416 [2024-11-19 12:09:56.624584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:53.416 [2024-11-19 12:09:56.624696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:53.416 [2024-11-19 12:09:56.624714] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:53.416 [2024-11-19 12:09:56.624830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.416 pt1 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.416 "name": "raid_bdev1", 00:17:53.416 "uuid": "71cad694-fa8e-4620-80e3-8204c245dea0", 00:17:53.416 "strip_size_kb": 0, 00:17:53.416 "state": "online", 00:17:53.416 "raid_level": "raid1", 00:17:53.416 "superblock": true, 00:17:53.416 "num_base_bdevs": 2, 00:17:53.416 "num_base_bdevs_discovered": 1, 00:17:53.416 
"num_base_bdevs_operational": 1, 00:17:53.416 "base_bdevs_list": [ 00:17:53.416 { 00:17:53.416 "name": null, 00:17:53.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.416 "is_configured": false, 00:17:53.416 "data_offset": 256, 00:17:53.416 "data_size": 7936 00:17:53.416 }, 00:17:53.416 { 00:17:53.416 "name": "pt2", 00:17:53.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.416 "is_configured": true, 00:17:53.416 "data_offset": 256, 00:17:53.416 "data_size": 7936 00:17:53.416 } 00:17:53.416 ] 00:17:53.416 }' 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.416 12:09:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.676 12:09:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:53.676 12:09:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:53.676 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.676 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.676 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.937 [2024-11-19 
12:09:57.061317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 71cad694-fa8e-4620-80e3-8204c245dea0 '!=' 71cad694-fa8e-4620-80e3-8204c245dea0 ']' 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87360 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87360 ']' 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87360 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87360 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.937 killing process with pid 87360 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87360' 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87360 00:17:53.937 [2024-11-19 12:09:57.141193] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.937 [2024-11-19 12:09:57.141278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.937 [2024-11-19 12:09:57.141334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:17:53.937 [2024-11-19 12:09:57.141354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:53.937 12:09:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87360 00:17:54.197 [2024-11-19 12:09:57.368138] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.580 12:09:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:55.580 00:17:55.580 real 0m6.081s 00:17:55.580 user 0m9.088s 00:17:55.580 sys 0m1.151s 00:17:55.580 12:09:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.580 12:09:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.580 ************************************ 00:17:55.580 END TEST raid_superblock_test_md_separate 00:17:55.580 ************************************ 00:17:55.580 12:09:58 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:55.580 12:09:58 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:55.580 12:09:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:55.580 12:09:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.580 12:09:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.580 ************************************ 00:17:55.580 START TEST raid_rebuild_test_sb_md_separate 00:17:55.580 ************************************ 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:55.580 
12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87683 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87683 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87683 ']' 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.580 12:09:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.580 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:55.580 Zero copy mechanism will not be used. 00:17:55.580 [2024-11-19 12:09:58.719349] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:17:55.580 [2024-11-19 12:09:58.719474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87683 ] 00:17:55.580 [2024-11-19 12:09:58.906941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.840 [2024-11-19 12:09:59.042937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.099 [2024-11-19 12:09:59.274357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.099 [2024-11-19 12:09:59.274420] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.359 BaseBdev1_malloc 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:56.359 12:09:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.359 [2024-11-19 12:09:59.568947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:56.359 [2024-11-19 12:09:59.569040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.359 [2024-11-19 12:09:59.569071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:56.359 [2024-11-19 12:09:59.569086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.359 [2024-11-19 12:09:59.571214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.359 [2024-11-19 12:09:59.571258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:56.359 BaseBdev1 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.359 BaseBdev2_malloc 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.359 [2024-11-19 12:09:59.625564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:56.359 [2024-11-19 12:09:59.625634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.359 [2024-11-19 12:09:59.625658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:56.359 [2024-11-19 12:09:59.625672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.359 [2024-11-19 12:09:59.627721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.359 [2024-11-19 12:09:59.627761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:56.359 BaseBdev2 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.359 spare_malloc 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.359 spare_delay 00:17:56.359 12:09:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.359 [2024-11-19 12:09:59.709552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:56.359 [2024-11-19 12:09:59.709618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.359 [2024-11-19 12:09:59.709640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:56.359 [2024-11-19 12:09:59.709654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.359 [2024-11-19 12:09:59.711778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.359 [2024-11-19 12:09:59.711820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:56.359 spare 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.359 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.359 [2024-11-19 12:09:59.721578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.359 [2024-11-19 12:09:59.723547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:17:56.359 [2024-11-19 12:09:59.723740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:56.359 [2024-11-19 12:09:59.723765] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:56.359 [2024-11-19 12:09:59.723839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:56.359 [2024-11-19 12:09:59.723983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:56.360 [2024-11-19 12:09:59.724009] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:56.360 [2024-11-19 12:09:59.724123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.360 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.360 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:56.360 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.360 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.360 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.360 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.360 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.360 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.360 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.360 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:56.360 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.619 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.619 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.619 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.619 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.619 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.619 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.619 "name": "raid_bdev1", 00:17:56.619 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc", 00:17:56.619 "strip_size_kb": 0, 00:17:56.619 "state": "online", 00:17:56.619 "raid_level": "raid1", 00:17:56.619 "superblock": true, 00:17:56.619 "num_base_bdevs": 2, 00:17:56.619 "num_base_bdevs_discovered": 2, 00:17:56.620 "num_base_bdevs_operational": 2, 00:17:56.620 "base_bdevs_list": [ 00:17:56.620 { 00:17:56.620 "name": "BaseBdev1", 00:17:56.620 "uuid": "ad798af0-73f3-5713-a03d-5c7057790ee5", 00:17:56.620 "is_configured": true, 00:17:56.620 "data_offset": 256, 00:17:56.620 "data_size": 7936 00:17:56.620 }, 00:17:56.620 { 00:17:56.620 "name": "BaseBdev2", 00:17:56.620 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:17:56.620 "is_configured": true, 00:17:56.620 "data_offset": 256, 00:17:56.620 "data_size": 7936 00:17:56.620 } 00:17:56.620 ] 00:17:56.620 }' 00:17:56.620 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.620 12:09:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.880 12:10:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:56.880 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:56.880 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.880 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.880 [2024-11-19 12:10:00.197055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.880 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.880 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:56.880 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.880 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:56.880 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.880 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.141 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.141 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:57.141 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:57.141 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:57.141 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:57.141 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:57.142 [2024-11-19 12:10:00.464444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:57.142 /dev/nbd0 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:57.142 
12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:57.142 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:57.402 1+0 records in 00:17:57.402 1+0 records out 00:17:57.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425608 s, 9.6 MB/s 00:17:57.402 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.402 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:57.402 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.402 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:57.402 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:57.402 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:57.402 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:57.402 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:57.402 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:57.402 12:10:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:17:57.974 7936+0 records in
00:17:57.974 7936+0 records out
00:17:57.974 32505856 bytes (33 MB, 31 MiB) copied, 0.606161 s, 53.6 MB/s
00:17:57.974 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:17:57.974 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:17:57.974 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:17:57.974 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:57.974 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i
00:17:57.974 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:57.974 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:58.234 [2024-11-19 12:10:01.349219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- #
break
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:58.234 [2024-11-19 12:10:01.370944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:58.234 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate --
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:58.235 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:58.235 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.235 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:58.235 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.235 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:58.235 "name": "raid_bdev1",
00:17:58.235 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc",
00:17:58.235 "strip_size_kb": 0,
00:17:58.235 "state": "online",
00:17:58.235 "raid_level": "raid1",
00:17:58.235 "superblock": true,
00:17:58.235 "num_base_bdevs": 2,
00:17:58.235 "num_base_bdevs_discovered": 1,
00:17:58.235 "num_base_bdevs_operational": 1,
00:17:58.235 "base_bdevs_list": [
00:17:58.235 {
00:17:58.235 "name": null,
00:17:58.235 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:58.235 "is_configured": false,
00:17:58.235 "data_offset": 0,
00:17:58.235 "data_size": 7936
00:17:58.235 },
00:17:58.235 {
00:17:58.235 "name": "BaseBdev2",
00:17:58.235 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506",
00:17:58.235 "is_configured": true,
00:17:58.235 "data_offset": 256,
00:17:58.235 "data_size": 7936
00:17:58.235 }
00:17:58.235 ]
00:17:58.235 }'
00:17:58.235 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:58.235 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:58.495 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:58.495 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.495 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:58.495 [2024-11-19 12:10:01.822126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-11-19 12:10:01.837962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260
00:17:58.495 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.495 12:10:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1
00:17:58.495 [2024-11-19 12:10:01.839712] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:59.879 12:10:02
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:59.879 "name": "raid_bdev1",
00:17:59.879 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc",
00:17:59.879 "strip_size_kb": 0,
00:17:59.879 "state": "online",
00:17:59.879 "raid_level": "raid1",
00:17:59.879 "superblock": true,
00:17:59.879 "num_base_bdevs": 2,
00:17:59.879 "num_base_bdevs_discovered": 2,
00:17:59.879 "num_base_bdevs_operational": 2,
00:17:59.879 "process": {
00:17:59.879 "type": "rebuild",
00:17:59.879 "target": "spare",
00:17:59.879 "progress": {
00:17:59.879 "blocks": 2560,
00:17:59.879 "percent": 32
00:17:59.879 }
00:17:59.879 },
00:17:59.879 "base_bdevs_list": [
00:17:59.879 {
00:17:59.879 "name": "spare",
00:17:59.879 "uuid": "8acc2983-fe53-5aa9-8c1f-d3a270f15633",
00:17:59.879 "is_configured": true,
00:17:59.879 "data_offset": 256,
00:17:59.879 "data_size": 7936
00:17:59.879 },
00:17:59.879 {
00:17:59.879 "name": "BaseBdev2",
00:17:59.879 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506",
00:17:59.879 "is_configured": true,
00:17:59.879 "data_offset": 256,
00:17:59.879 "data_size": 7936
00:17:59.879 }
00:17:59.879 ]
00:17:59.879 }'
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:59.879 12:10:02 bdev_raid.raid_rebuild_test_sb_md_separate --
common/autotest_common.sh@10 -- # set +x
00:17:59.879 [2024-11-19 12:10:03.007464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-11-19 12:10:03.044452] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
[2024-11-19 12:10:03.044508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
[2024-11-19 12:10:03.044538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-11-19 12:10:03.044547] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- #
local tmp
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:59.879 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:59.879 "name": "raid_bdev1",
00:17:59.879 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc",
00:17:59.879 "strip_size_kb": 0,
00:17:59.879 "state": "online",
00:17:59.879 "raid_level": "raid1",
00:17:59.879 "superblock": true,
00:17:59.879 "num_base_bdevs": 2,
00:17:59.879 "num_base_bdevs_discovered": 1,
00:17:59.879 "num_base_bdevs_operational": 1,
00:17:59.879 "base_bdevs_list": [
00:17:59.879 {
00:17:59.879 "name": null,
00:17:59.879 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:59.879 "is_configured": false,
00:17:59.879 "data_offset": 0,
00:17:59.879 "data_size": 7936
00:17:59.879 },
00:17:59.879 {
00:17:59.879 "name": "BaseBdev2",
00:17:59.879 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506",
00:17:59.879 "is_configured": true,
00:17:59.879 "data_offset": 256,
00:17:59.879 "data_size": 7936
00:17:59.879 }
00:17:59.879 ]
00:17:59.879 }'
00:17:59.880 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:59.880 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:00.451 12:10:03
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:00.451 "name": "raid_bdev1",
00:18:00.451 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc",
00:18:00.451 "strip_size_kb": 0,
00:18:00.451 "state": "online",
00:18:00.451 "raid_level": "raid1",
00:18:00.451 "superblock": true,
00:18:00.451 "num_base_bdevs": 2,
00:18:00.451 "num_base_bdevs_discovered": 1,
00:18:00.451 "num_base_bdevs_operational": 1,
00:18:00.451 "base_bdevs_list": [
00:18:00.451 {
00:18:00.451 "name": null,
00:18:00.451 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:00.451 "is_configured": false,
00:18:00.451 "data_offset": 0,
00:18:00.451 "data_size": 7936
00:18:00.451 },
00:18:00.451 {
00:18:00.451 "name": "BaseBdev2",
00:18:00.451 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506",
00:18:00.451 "is_configured": true,
00:18:00.451 "data_offset": 256,
00:18:00.451 "data_size": 7936
00:18:00.451 }
00:18:00.451 ]
00:18:00.451 }'
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:00.451 [2024-11-19 12:10:03.670497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-11-19 12:10:03.684326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.451 12:10:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1
00:18:00.451 [2024-11-19 12:10:03.686093] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:01.392 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:01.392 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:01.392 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:01.392 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171
-- # local target=spare
00:18:01.392 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:01.392 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:01.392 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:01.392 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:01.392 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:01.392 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:01.392 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:01.392 "name": "raid_bdev1",
00:18:01.392 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc",
00:18:01.392 "strip_size_kb": 0,
00:18:01.392 "state": "online",
00:18:01.392 "raid_level": "raid1",
00:18:01.392 "superblock": true,
00:18:01.392 "num_base_bdevs": 2,
00:18:01.392 "num_base_bdevs_discovered": 2,
00:18:01.392 "num_base_bdevs_operational": 2,
00:18:01.392 "process": {
00:18:01.392 "type": "rebuild",
00:18:01.392 "target": "spare",
00:18:01.392 "progress": {
00:18:01.392 "blocks": 2560,
00:18:01.392 "percent": 32
00:18:01.392 }
00:18:01.392 },
00:18:01.392 "base_bdevs_list": [
00:18:01.392 {
00:18:01.392 "name": "spare",
00:18:01.392 "uuid": "8acc2983-fe53-5aa9-8c1f-d3a270f15633",
00:18:01.392 "is_configured": true,
00:18:01.392 "data_offset": 256,
00:18:01.392 "data_size": 7936
00:18:01.392 },
00:18:01.392 {
00:18:01.392 "name": "BaseBdev2",
00:18:01.392 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506",
00:18:01.392 "is_configured": true,
00:18:01.392 "data_offset": 256,
00:18:01.392 "data_size": 7936
00:18:01.392 }
00:18:01.392 ]
00:18:01.392 }'
00:18:01.392 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:01.652 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:01.652 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:01.652 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:01.652 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:18:01.652 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:18:01.652 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:18:01.652 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:18:01.652 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:18:01.652 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=697
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:01.653
12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:01.653 "name": "raid_bdev1",
00:18:01.653 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc",
00:18:01.653 "strip_size_kb": 0,
00:18:01.653 "state": "online",
00:18:01.653 "raid_level": "raid1",
00:18:01.653 "superblock": true,
00:18:01.653 "num_base_bdevs": 2,
00:18:01.653 "num_base_bdevs_discovered": 2,
00:18:01.653 "num_base_bdevs_operational": 2,
00:18:01.653 "process": {
00:18:01.653 "type": "rebuild",
00:18:01.653 "target": "spare",
00:18:01.653 "progress": {
00:18:01.653 "blocks": 2816,
00:18:01.653 "percent": 35
00:18:01.653 }
00:18:01.653 },
00:18:01.653 "base_bdevs_list": [
00:18:01.653 {
00:18:01.653 "name": "spare",
00:18:01.653 "uuid": "8acc2983-fe53-5aa9-8c1f-d3a270f15633",
00:18:01.653 "is_configured": true,
00:18:01.653 "data_offset": 256,
00:18:01.653 "data_size": 7936
00:18:01.653 },
00:18:01.653 {
00:18:01.653 "name": "BaseBdev2",
00:18:01.653 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506",
00:18:01.653 "is_configured": true,
00:18:01.653 "data_offset": 256,
00:18:01.653 "data_size": 7936
00:18:01.653 }
00:18:01.653 ]
00:18:01.653 }'
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 --
# [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:01.653 12:10:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:18:03.036 12:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:03.036 12:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:03.036 12:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:03.036 12:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:03.036 12:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:03.036 12:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:03.036 12:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:03.036 12:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:03.036 12:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:03.036 12:10:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:03.036 12:10:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:03.036 12:10:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:03.036 "name": "raid_bdev1",
00:18:03.036 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc",
00:18:03.036 "strip_size_kb": 0,
00:18:03.036
"state": "online", 00:18:03.036 "raid_level": "raid1", 00:18:03.036 "superblock": true, 00:18:03.036 "num_base_bdevs": 2, 00:18:03.036 "num_base_bdevs_discovered": 2, 00:18:03.036 "num_base_bdevs_operational": 2, 00:18:03.036 "process": { 00:18:03.036 "type": "rebuild", 00:18:03.036 "target": "spare", 00:18:03.037 "progress": { 00:18:03.037 "blocks": 5888, 00:18:03.037 "percent": 74 00:18:03.037 } 00:18:03.037 }, 00:18:03.037 "base_bdevs_list": [ 00:18:03.037 { 00:18:03.037 "name": "spare", 00:18:03.037 "uuid": "8acc2983-fe53-5aa9-8c1f-d3a270f15633", 00:18:03.037 "is_configured": true, 00:18:03.037 "data_offset": 256, 00:18:03.037 "data_size": 7936 00:18:03.037 }, 00:18:03.037 { 00:18:03.037 "name": "BaseBdev2", 00:18:03.037 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:18:03.037 "is_configured": true, 00:18:03.037 "data_offset": 256, 00:18:03.037 "data_size": 7936 00:18:03.037 } 00:18:03.037 ] 00:18:03.037 }' 00:18:03.037 12:10:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.037 12:10:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:03.037 12:10:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.037 12:10:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.037 12:10:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:03.607 [2024-11-19 12:10:06.797797] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:03.607 [2024-11-19 12:10:06.797884] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:03.608 [2024-11-19 12:10:06.797975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:03.868 "name": "raid_bdev1",
00:18:03.868 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc",
00:18:03.868 "strip_size_kb": 0,
00:18:03.868 "state": "online",
00:18:03.868 "raid_level": "raid1",
00:18:03.868 "superblock": true,
00:18:03.868 "num_base_bdevs": 2,
00:18:03.868 "num_base_bdevs_discovered": 2,
00:18:03.868 "num_base_bdevs_operational": 2,
00:18:03.868 "base_bdevs_list": [
00:18:03.868 {
00:18:03.868 "name": "spare",
00:18:03.868 "uuid": "8acc2983-fe53-5aa9-8c1f-d3a270f15633",
00:18:03.868 "is_configured": true,
00:18:03.868 "data_offset": 256,
00:18:03.868 "data_size": 7936
00:18:03.868 },
00:18:03.868 {
00:18:03.868 "name": "BaseBdev2",
00:18:03.868 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506",
00:18:03.868 "is_configured": true,
00:18:03.868 "data_offset": 256,
00:18:03.868 "data_size": 7936
00:18:03.868 }
00:18:03.868 ]
00:18:03.868 }'
00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:18:03.868 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:04.128 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:18:04.128 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break
00:18:04.128 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:04.128 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:04.128 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:04.128 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:04.128 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:04.128 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:04.128 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.128 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:04.128 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:04.128
12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.128 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:04.128 "name": "raid_bdev1",
00:18:04.128 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc",
00:18:04.128 "strip_size_kb": 0,
00:18:04.128 "state": "online",
00:18:04.128 "raid_level": "raid1",
00:18:04.128 "superblock": true,
00:18:04.128 "num_base_bdevs": 2,
00:18:04.128 "num_base_bdevs_discovered": 2,
00:18:04.128 "num_base_bdevs_operational": 2,
00:18:04.128 "base_bdevs_list": [
00:18:04.128 {
00:18:04.128 "name": "spare",
00:18:04.128 "uuid": "8acc2983-fe53-5aa9-8c1f-d3a270f15633",
00:18:04.128 "is_configured": true,
00:18:04.128 "data_offset": 256,
00:18:04.128 "data_size": 7936
00:18:04.128 },
00:18:04.128 {
00:18:04.128 "name": "BaseBdev2",
00:18:04.128 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506",
00:18:04.128 "is_configured": true,
00:18:04.128 "data_offset": 256,
00:18:04.128 "data_size": 7936
00:18:04.128 }
00:18:04.128 ]
00:18:04.128 }'
00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:04.129 12:10:07
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.129 "name": "raid_bdev1", 00:18:04.129 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc", 00:18:04.129 "strip_size_kb": 0, 00:18:04.129 "state": "online", 00:18:04.129 "raid_level": "raid1", 00:18:04.129 "superblock": true, 00:18:04.129 "num_base_bdevs": 2, 00:18:04.129 "num_base_bdevs_discovered": 2, 00:18:04.129 "num_base_bdevs_operational": 2, 00:18:04.129 "base_bdevs_list": [ 00:18:04.129 { 00:18:04.129 "name": "spare", 00:18:04.129 "uuid": 
"8acc2983-fe53-5aa9-8c1f-d3a270f15633", 00:18:04.129 "is_configured": true, 00:18:04.129 "data_offset": 256, 00:18:04.129 "data_size": 7936 00:18:04.129 }, 00:18:04.129 { 00:18:04.129 "name": "BaseBdev2", 00:18:04.129 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:18:04.129 "is_configured": true, 00:18:04.129 "data_offset": 256, 00:18:04.129 "data_size": 7936 00:18:04.129 } 00:18:04.129 ] 00:18:04.129 }' 00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.129 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.700 [2024-11-19 12:10:07.822968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.700 [2024-11-19 12:10:07.823010] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.700 [2024-11-19 12:10:07.823091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.700 [2024-11-19 12:10:07.823180] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.700 [2024-11-19 12:10:07.823190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- 
# jq length 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:04.700 12:10:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:04.961 
/dev/nbd0 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.961 1+0 records in 00:18:04.961 1+0 records out 00:18:04.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418581 s, 9.8 MB/s 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.961 12:10:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:04.961 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:04.961 /dev/nbd1 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:18:05.222 1+0 records in 00:18:05.222 1+0 records out 00:18:05.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459165 s, 8.9 MB/s 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.222 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:05.482 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:05.482 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:05.482 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:05.482 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.482 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.482 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:05.482 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:05.482 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.482 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.482 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:05.743 
12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.743 [2024-11-19 12:10:08.991160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:05.743 [2024-11-19 12:10:08.991231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.743 [2024-11-19 12:10:08.991255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:05.743 [2024-11-19 12:10:08.991264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.743 [2024-11-19 12:10:08.993282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.743 [2024-11-19 12:10:08.993318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:05.743 [2024-11-19 12:10:08.993377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:18:05.743 [2024-11-19 12:10:08.993448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.743 [2024-11-19 12:10:08.993577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:05.743 spare 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.743 12:10:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.743 [2024-11-19 12:10:09.093467] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:05.743 [2024-11-19 12:10:09.093497] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:05.743 [2024-11-19 12:10:09.093594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:05.743 [2024-11-19 12:10:09.093735] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:05.743 [2024-11-19 12:10:09.093762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:05.743 [2024-11-19 12:10:09.093888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.743 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.003 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.003 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.003 "name": "raid_bdev1", 00:18:06.003 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc", 00:18:06.003 "strip_size_kb": 0, 00:18:06.003 "state": "online", 00:18:06.003 "raid_level": "raid1", 00:18:06.003 "superblock": true, 00:18:06.003 "num_base_bdevs": 2, 00:18:06.003 "num_base_bdevs_discovered": 2, 00:18:06.003 "num_base_bdevs_operational": 2, 00:18:06.003 "base_bdevs_list": [ 
00:18:06.003 { 00:18:06.003 "name": "spare", 00:18:06.003 "uuid": "8acc2983-fe53-5aa9-8c1f-d3a270f15633", 00:18:06.003 "is_configured": true, 00:18:06.003 "data_offset": 256, 00:18:06.003 "data_size": 7936 00:18:06.003 }, 00:18:06.003 { 00:18:06.003 "name": "BaseBdev2", 00:18:06.003 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:18:06.003 "is_configured": true, 00:18:06.003 "data_offset": 256, 00:18:06.003 "data_size": 7936 00:18:06.003 } 00:18:06.003 ] 00:18:06.003 }' 00:18:06.003 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.003 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.289 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.289 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.289 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.289 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.289 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.289 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.289 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.289 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.289 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.289 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.289 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.289 "name": "raid_bdev1", 00:18:06.289 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc", 00:18:06.289 "strip_size_kb": 0, 00:18:06.289 "state": "online", 00:18:06.289 "raid_level": "raid1", 00:18:06.289 "superblock": true, 00:18:06.289 "num_base_bdevs": 2, 00:18:06.289 "num_base_bdevs_discovered": 2, 00:18:06.289 "num_base_bdevs_operational": 2, 00:18:06.289 "base_bdevs_list": [ 00:18:06.289 { 00:18:06.289 "name": "spare", 00:18:06.289 "uuid": "8acc2983-fe53-5aa9-8c1f-d3a270f15633", 00:18:06.289 "is_configured": true, 00:18:06.289 "data_offset": 256, 00:18:06.289 "data_size": 7936 00:18:06.289 }, 00:18:06.289 { 00:18:06.289 "name": "BaseBdev2", 00:18:06.289 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:18:06.289 "is_configured": true, 00:18:06.289 "data_offset": 256, 00:18:06.290 "data_size": 7936 00:18:06.290 } 00:18:06.290 ] 00:18:06.290 }' 00:18:06.290 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.584 [2024-11-19 12:10:09.765842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.584 "name": "raid_bdev1", 00:18:06.584 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc", 00:18:06.584 "strip_size_kb": 0, 00:18:06.584 "state": "online", 00:18:06.584 "raid_level": "raid1", 00:18:06.584 "superblock": true, 00:18:06.584 "num_base_bdevs": 2, 00:18:06.584 "num_base_bdevs_discovered": 1, 00:18:06.584 "num_base_bdevs_operational": 1, 00:18:06.584 "base_bdevs_list": [ 00:18:06.584 { 00:18:06.584 "name": null, 00:18:06.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.584 "is_configured": false, 00:18:06.584 "data_offset": 0, 00:18:06.584 "data_size": 7936 00:18:06.584 }, 00:18:06.584 { 00:18:06.584 "name": "BaseBdev2", 00:18:06.584 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:18:06.584 "is_configured": true, 00:18:06.584 "data_offset": 256, 00:18:06.584 "data_size": 7936 00:18:06.584 } 00:18:06.584 ] 00:18:06.584 }' 00:18:06.584 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.585 12:10:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.154 12:10:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:07.154 12:10:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:07.154 12:10:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.154 [2024-11-19 12:10:10.233397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.154 [2024-11-19 12:10:10.233587] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:07.154 [2024-11-19 12:10:10.233610] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:07.154 [2024-11-19 12:10:10.233649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.154 [2024-11-19 12:10:10.247450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:07.154 12:10:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.154 12:10:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:07.154 [2024-11-19 12:10:10.249270] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.096 "name": "raid_bdev1", 00:18:08.096 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc", 00:18:08.096 "strip_size_kb": 0, 00:18:08.096 "state": "online", 00:18:08.096 "raid_level": "raid1", 00:18:08.096 "superblock": true, 00:18:08.096 "num_base_bdevs": 2, 00:18:08.096 "num_base_bdevs_discovered": 2, 00:18:08.096 "num_base_bdevs_operational": 2, 00:18:08.096 "process": { 00:18:08.096 "type": "rebuild", 00:18:08.096 "target": "spare", 00:18:08.096 "progress": { 00:18:08.096 "blocks": 2560, 00:18:08.096 "percent": 32 00:18:08.096 } 00:18:08.096 }, 00:18:08.096 "base_bdevs_list": [ 00:18:08.096 { 00:18:08.096 "name": "spare", 00:18:08.096 "uuid": "8acc2983-fe53-5aa9-8c1f-d3a270f15633", 00:18:08.096 "is_configured": true, 00:18:08.096 "data_offset": 256, 00:18:08.096 "data_size": 7936 00:18:08.096 }, 00:18:08.096 { 00:18:08.096 "name": "BaseBdev2", 00:18:08.096 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:18:08.096 "is_configured": true, 00:18:08.096 "data_offset": 256, 00:18:08.096 "data_size": 7936 00:18:08.096 } 00:18:08.096 ] 00:18:08.096 }' 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.096 
12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.096 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.096 [2024-11-19 12:10:11.409225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:08.096 [2024-11-19 12:10:11.454053] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:08.096 [2024-11-19 12:10:11.454107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.096 [2024-11-19 12:10:11.454120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:08.096 [2024-11-19 12:10:11.454139] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.357 12:10:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.357 "name": "raid_bdev1", 00:18:08.357 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc", 00:18:08.357 "strip_size_kb": 0, 00:18:08.357 "state": "online", 00:18:08.357 "raid_level": "raid1", 00:18:08.357 "superblock": true, 00:18:08.357 "num_base_bdevs": 2, 00:18:08.357 "num_base_bdevs_discovered": 1, 00:18:08.357 "num_base_bdevs_operational": 1, 00:18:08.357 "base_bdevs_list": [ 00:18:08.357 { 00:18:08.357 "name": null, 00:18:08.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.357 "is_configured": false, 00:18:08.357 "data_offset": 0, 00:18:08.357 "data_size": 7936 00:18:08.357 }, 00:18:08.357 { 00:18:08.357 "name": "BaseBdev2", 00:18:08.357 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:18:08.357 "is_configured": true, 00:18:08.357 "data_offset": 256, 00:18:08.357 "data_size": 7936 00:18:08.357 } 
00:18:08.357 ] 00:18:08.357 }' 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.357 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.618 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:08.618 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.618 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.618 [2024-11-19 12:10:11.900749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:08.618 [2024-11-19 12:10:11.900809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.618 [2024-11-19 12:10:11.900834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:08.618 [2024-11-19 12:10:11.900844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.618 [2024-11-19 12:10:11.901104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.618 [2024-11-19 12:10:11.901131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:08.618 [2024-11-19 12:10:11.901184] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:08.618 [2024-11-19 12:10:11.901200] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:08.618 [2024-11-19 12:10:11.901209] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:08.618 [2024-11-19 12:10:11.901229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:08.618 [2024-11-19 12:10:11.915057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:08.618 spare 00:18:08.618 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.618 [2024-11-19 12:10:11.916872] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:08.618 12:10:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:09.558 12:10:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.558 12:10:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.558 12:10:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.558 12:10:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.558 12:10:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.558 12:10:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.558 12:10:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.558 12:10:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.558 12:10:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.818 12:10:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.818 12:10:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.818 "name": 
"raid_bdev1", 00:18:09.818 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc", 00:18:09.818 "strip_size_kb": 0, 00:18:09.818 "state": "online", 00:18:09.818 "raid_level": "raid1", 00:18:09.818 "superblock": true, 00:18:09.818 "num_base_bdevs": 2, 00:18:09.818 "num_base_bdevs_discovered": 2, 00:18:09.818 "num_base_bdevs_operational": 2, 00:18:09.818 "process": { 00:18:09.818 "type": "rebuild", 00:18:09.818 "target": "spare", 00:18:09.818 "progress": { 00:18:09.818 "blocks": 2560, 00:18:09.818 "percent": 32 00:18:09.818 } 00:18:09.818 }, 00:18:09.818 "base_bdevs_list": [ 00:18:09.818 { 00:18:09.818 "name": "spare", 00:18:09.818 "uuid": "8acc2983-fe53-5aa9-8c1f-d3a270f15633", 00:18:09.818 "is_configured": true, 00:18:09.818 "data_offset": 256, 00:18:09.818 "data_size": 7936 00:18:09.818 }, 00:18:09.818 { 00:18:09.818 "name": "BaseBdev2", 00:18:09.818 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:18:09.818 "is_configured": true, 00:18:09.818 "data_offset": 256, 00:18:09.818 "data_size": 7936 00:18:09.818 } 00:18:09.818 ] 00:18:09.818 }' 00:18:09.818 12:10:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.818 [2024-11-19 12:10:13.076718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:09.818 [2024-11-19 12:10:13.121657] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:09.818 [2024-11-19 12:10:13.121711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.818 [2024-11-19 12:10:13.121727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.818 [2024-11-19 12:10:13.121734] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.818 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.079 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.079 "name": "raid_bdev1", 00:18:10.079 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc", 00:18:10.079 "strip_size_kb": 0, 00:18:10.079 "state": "online", 00:18:10.079 "raid_level": "raid1", 00:18:10.079 "superblock": true, 00:18:10.079 "num_base_bdevs": 2, 00:18:10.079 "num_base_bdevs_discovered": 1, 00:18:10.079 "num_base_bdevs_operational": 1, 00:18:10.079 "base_bdevs_list": [ 00:18:10.079 { 00:18:10.079 "name": null, 00:18:10.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.079 "is_configured": false, 00:18:10.079 "data_offset": 0, 00:18:10.079 "data_size": 7936 00:18:10.079 }, 00:18:10.079 { 00:18:10.079 "name": "BaseBdev2", 00:18:10.079 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:18:10.079 "is_configured": true, 00:18:10.079 "data_offset": 256, 00:18:10.079 "data_size": 7936 00:18:10.079 } 00:18:10.079 ] 00:18:10.079 }' 00:18:10.079 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.079 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.339 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.339 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.339 12:10:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.339 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.339 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.339 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.339 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.339 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.340 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.340 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.340 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.340 "name": "raid_bdev1", 00:18:10.340 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc", 00:18:10.340 "strip_size_kb": 0, 00:18:10.340 "state": "online", 00:18:10.340 "raid_level": "raid1", 00:18:10.340 "superblock": true, 00:18:10.340 "num_base_bdevs": 2, 00:18:10.340 "num_base_bdevs_discovered": 1, 00:18:10.340 "num_base_bdevs_operational": 1, 00:18:10.340 "base_bdevs_list": [ 00:18:10.340 { 00:18:10.340 "name": null, 00:18:10.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.340 "is_configured": false, 00:18:10.340 "data_offset": 0, 00:18:10.340 "data_size": 7936 00:18:10.340 }, 00:18:10.340 { 00:18:10.340 "name": "BaseBdev2", 00:18:10.340 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:18:10.340 "is_configured": true, 00:18:10.340 "data_offset": 256, 00:18:10.340 "data_size": 7936 00:18:10.340 } 00:18:10.340 ] 00:18:10.340 }' 00:18:10.340 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.340 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.340 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.600 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.600 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:10.600 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.600 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.600 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.600 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:10.600 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.600 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.600 [2024-11-19 12:10:13.748133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:10.600 [2024-11-19 12:10:13.748181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.600 [2024-11-19 12:10:13.748205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:10.600 [2024-11-19 12:10:13.748213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.600 [2024-11-19 12:10:13.748458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.600 [2024-11-19 12:10:13.748479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:10.601 [2024-11-19 12:10:13.748525] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:10.601 [2024-11-19 12:10:13.748540] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:10.601 [2024-11-19 12:10:13.748549] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:10.601 [2024-11-19 12:10:13.748559] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:10.601 BaseBdev1 00:18:10.601 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.601 12:10:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.542 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.542 "name": "raid_bdev1", 00:18:11.542 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc", 00:18:11.542 "strip_size_kb": 0, 00:18:11.542 "state": "online", 00:18:11.542 "raid_level": "raid1", 00:18:11.542 "superblock": true, 00:18:11.542 "num_base_bdevs": 2, 00:18:11.542 "num_base_bdevs_discovered": 1, 00:18:11.542 "num_base_bdevs_operational": 1, 00:18:11.542 "base_bdevs_list": [ 00:18:11.542 { 00:18:11.542 "name": null, 00:18:11.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.542 "is_configured": false, 00:18:11.542 "data_offset": 0, 00:18:11.542 "data_size": 7936 00:18:11.542 }, 00:18:11.542 { 00:18:11.542 "name": "BaseBdev2", 00:18:11.542 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:18:11.543 "is_configured": true, 00:18:11.543 "data_offset": 256, 00:18:11.543 "data_size": 7936 00:18:11.543 } 00:18:11.543 ] 00:18:11.543 }' 00:18:11.543 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.543 12:10:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.114 "name": "raid_bdev1", 00:18:12.114 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc", 00:18:12.114 "strip_size_kb": 0, 00:18:12.114 "state": "online", 00:18:12.114 "raid_level": "raid1", 00:18:12.114 "superblock": true, 00:18:12.114 "num_base_bdevs": 2, 00:18:12.114 "num_base_bdevs_discovered": 1, 00:18:12.114 "num_base_bdevs_operational": 1, 00:18:12.114 "base_bdevs_list": [ 00:18:12.114 { 00:18:12.114 "name": null, 00:18:12.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.114 "is_configured": false, 00:18:12.114 "data_offset": 0, 00:18:12.114 "data_size": 7936 00:18:12.114 }, 00:18:12.114 { 00:18:12.114 "name": "BaseBdev2", 00:18:12.114 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:18:12.114 "is_configured": 
true, 00:18:12.114 "data_offset": 256, 00:18:12.114 "data_size": 7936 00:18:12.114 } 00:18:12.114 ] 00:18:12.114 }' 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.114 [2024-11-19 12:10:15.353761] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.114 [2024-11-19 12:10:15.353921] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:12.114 [2024-11-19 12:10:15.353952] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:12.114 request: 00:18:12.114 { 00:18:12.114 "base_bdev": "BaseBdev1", 00:18:12.114 "raid_bdev": "raid_bdev1", 00:18:12.114 "method": "bdev_raid_add_base_bdev", 00:18:12.114 "req_id": 1 00:18:12.114 } 00:18:12.114 Got JSON-RPC error response 00:18:12.114 response: 00:18:12.114 { 00:18:12.114 "code": -22, 00:18:12.114 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:12.114 } 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.114 12:10:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.054 "name": "raid_bdev1", 00:18:13.054 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc", 00:18:13.054 "strip_size_kb": 0, 00:18:13.054 "state": "online", 00:18:13.054 "raid_level": "raid1", 00:18:13.054 "superblock": true, 00:18:13.054 "num_base_bdevs": 2, 00:18:13.054 "num_base_bdevs_discovered": 1, 00:18:13.054 "num_base_bdevs_operational": 1, 00:18:13.054 "base_bdevs_list": [ 00:18:13.054 { 00:18:13.054 "name": null, 00:18:13.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.054 "is_configured": false, 00:18:13.054 
"data_offset": 0, 00:18:13.054 "data_size": 7936 00:18:13.054 }, 00:18:13.054 { 00:18:13.054 "name": "BaseBdev2", 00:18:13.054 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:18:13.054 "is_configured": true, 00:18:13.054 "data_offset": 256, 00:18:13.054 "data_size": 7936 00:18:13.054 } 00:18:13.054 ] 00:18:13.054 }' 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.054 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.625 "name": "raid_bdev1", 00:18:13.625 "uuid": "30eaa65d-5b06-467f-98c9-e53a954425fc", 00:18:13.625 
"strip_size_kb": 0, 00:18:13.625 "state": "online", 00:18:13.625 "raid_level": "raid1", 00:18:13.625 "superblock": true, 00:18:13.625 "num_base_bdevs": 2, 00:18:13.625 "num_base_bdevs_discovered": 1, 00:18:13.625 "num_base_bdevs_operational": 1, 00:18:13.625 "base_bdevs_list": [ 00:18:13.625 { 00:18:13.625 "name": null, 00:18:13.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.625 "is_configured": false, 00:18:13.625 "data_offset": 0, 00:18:13.625 "data_size": 7936 00:18:13.625 }, 00:18:13.625 { 00:18:13.625 "name": "BaseBdev2", 00:18:13.625 "uuid": "7f742f8e-79f2-5447-a333-54dd2ce73506", 00:18:13.625 "is_configured": true, 00:18:13.625 "data_offset": 256, 00:18:13.625 "data_size": 7936 00:18:13.625 } 00:18:13.625 ] 00:18:13.625 }' 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87683 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87683 ']' 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87683 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.625 12:10:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87683 00:18:13.886 12:10:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.886 12:10:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.886 killing process with pid 87683 00:18:13.886 12:10:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87683' 00:18:13.886 12:10:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87683 00:18:13.886 Received shutdown signal, test time was about 60.000000 seconds 00:18:13.886 00:18:13.886 Latency(us) 00:18:13.886 [2024-11-19T12:10:17.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.886 [2024-11-19T12:10:17.263Z] =================================================================================================================== 00:18:13.886 [2024-11-19T12:10:17.263Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:13.886 [2024-11-19 12:10:17.027194] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:13.886 [2024-11-19 12:10:17.027312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.886 [2024-11-19 12:10:17.027368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.886 [2024-11-19 12:10:17.027380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:13.886 12:10:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87683 00:18:14.146 [2024-11-19 12:10:17.330802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:15.085 12:10:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:15.085 00:18:15.085 real 0m19.729s 00:18:15.085 user 0m25.672s 00:18:15.085 sys 0m2.829s 00:18:15.085 12:10:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.086 12:10:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.086 ************************************ 00:18:15.086 END TEST raid_rebuild_test_sb_md_separate 00:18:15.086 ************************************ 00:18:15.086 12:10:18 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:15.086 12:10:18 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:15.086 12:10:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:15.086 12:10:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.086 12:10:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:15.086 ************************************ 00:18:15.086 START TEST raid_state_function_test_sb_md_interleaved 00:18:15.086 ************************************ 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:15.086 12:10:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88379 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88379' 00:18:15.086 Process raid pid: 88379 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88379 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88379 ']' 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.086 12:10:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.346 [2024-11-19 12:10:18.514459] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
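The trace above shows `bdev_svc` starting before the test drives it over the RPC socket with `bdev_raid_create` and then checks the result via `bdev_raid_get_bdevs all` piped through `jq -r '.[] | select(.name == "Existed_Raid")'`. The state check that `verify_raid_bdev_state` performs in bdev_raid.sh can be sketched in Python against a captured payload; the sample record below mirrors fields printed later in this log, and the helper name is illustrative rather than an SPDK API:

```python
# Sketch of the verify_raid_bdev_state check from bdev_raid.sh, redone in
# Python. The sample payload copies field names/values from this log;
# verify_raid_bdev_state() here is an illustrative helper, not an SPDK API.
import json

# Example output of `rpc.py bdev_raid_get_bdevs all` (values taken from the log)
raw = json.dumps([{
    "name": "Existed_Raid",
    "strip_size_kb": 0,
    "state": "configuring",
    "raid_level": "raid1",
    "superblock": True,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 2,
}])

def verify_raid_bdev_state(payload, name, state, raid_level, strip_size, operational):
    """Return True when the named raid bdev matches the expected properties,
    mirroring the jq select + field comparisons done by the shell helper."""
    info = next(b for b in json.loads(payload) if b["name"] == name)
    return (info["state"] == state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(raw, "Existed_Raid", "configuring", "raid1", 0, 2))
```

The shell helper additionally counts discovered base bdevs from `base_bdevs_list`; the same `next(...)` selection pattern extends naturally to that field.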
00:18:15.346 [2024-11-19 12:10:18.514565] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.346 [2024-11-19 12:10:18.693564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.606 [2024-11-19 12:10:18.799437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.867 [2024-11-19 12:10:19.007678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.867 [2024-11-19 12:10:19.007727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.125 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.125 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:16.125 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:16.125 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.125 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.125 [2024-11-19 12:10:19.324559] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:16.125 [2024-11-19 12:10:19.324607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:16.125 [2024-11-19 12:10:19.324619] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:16.125 [2024-11-19 12:10:19.324629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:16.125 12:10:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.126 12:10:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.126 "name": "Existed_Raid", 00:18:16.126 "uuid": "4be55d4a-c8e7-4c43-a443-dd4b964bf4f8", 00:18:16.126 "strip_size_kb": 0, 00:18:16.126 "state": "configuring", 00:18:16.126 "raid_level": "raid1", 00:18:16.126 "superblock": true, 00:18:16.126 "num_base_bdevs": 2, 00:18:16.126 "num_base_bdevs_discovered": 0, 00:18:16.126 "num_base_bdevs_operational": 2, 00:18:16.126 "base_bdevs_list": [ 00:18:16.126 { 00:18:16.126 "name": "BaseBdev1", 00:18:16.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.126 "is_configured": false, 00:18:16.126 "data_offset": 0, 00:18:16.126 "data_size": 0 00:18:16.126 }, 00:18:16.126 { 00:18:16.126 "name": "BaseBdev2", 00:18:16.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.126 "is_configured": false, 00:18:16.126 "data_offset": 0, 00:18:16.126 "data_size": 0 00:18:16.126 } 00:18:16.126 ] 00:18:16.126 }' 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.126 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.695 [2024-11-19 12:10:19.787691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.695 [2024-11-19 12:10:19.787727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.695 [2024-11-19 12:10:19.799676] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:16.695 [2024-11-19 12:10:19.799715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:16.695 [2024-11-19 12:10:19.799723] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:16.695 [2024-11-19 12:10:19.799734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.695 [2024-11-19 12:10:19.847891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.695 BaseBdev1 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.695 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.695 [ 00:18:16.695 { 00:18:16.695 "name": "BaseBdev1", 00:18:16.695 "aliases": [ 00:18:16.695 "92778848-86b7-4a40-ae15-1b83edccd9ed" 00:18:16.695 ], 00:18:16.695 "product_name": "Malloc disk", 00:18:16.695 "block_size": 4128, 00:18:16.695 "num_blocks": 8192, 00:18:16.695 "uuid": "92778848-86b7-4a40-ae15-1b83edccd9ed", 00:18:16.695 "md_size": 32, 00:18:16.695 
"md_interleave": true, 00:18:16.695 "dif_type": 0, 00:18:16.695 "assigned_rate_limits": { 00:18:16.695 "rw_ios_per_sec": 0, 00:18:16.695 "rw_mbytes_per_sec": 0, 00:18:16.695 "r_mbytes_per_sec": 0, 00:18:16.695 "w_mbytes_per_sec": 0 00:18:16.695 }, 00:18:16.695 "claimed": true, 00:18:16.695 "claim_type": "exclusive_write", 00:18:16.695 "zoned": false, 00:18:16.695 "supported_io_types": { 00:18:16.695 "read": true, 00:18:16.695 "write": true, 00:18:16.695 "unmap": true, 00:18:16.695 "flush": true, 00:18:16.695 "reset": true, 00:18:16.695 "nvme_admin": false, 00:18:16.695 "nvme_io": false, 00:18:16.695 "nvme_io_md": false, 00:18:16.695 "write_zeroes": true, 00:18:16.695 "zcopy": true, 00:18:16.695 "get_zone_info": false, 00:18:16.695 "zone_management": false, 00:18:16.695 "zone_append": false, 00:18:16.695 "compare": false, 00:18:16.695 "compare_and_write": false, 00:18:16.695 "abort": true, 00:18:16.695 "seek_hole": false, 00:18:16.695 "seek_data": false, 00:18:16.695 "copy": true, 00:18:16.696 "nvme_iov_md": false 00:18:16.696 }, 00:18:16.696 "memory_domains": [ 00:18:16.696 { 00:18:16.696 "dma_device_id": "system", 00:18:16.696 "dma_device_type": 1 00:18:16.696 }, 00:18:16.696 { 00:18:16.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.696 "dma_device_type": 2 00:18:16.696 } 00:18:16.696 ], 00:18:16.696 "driver_specific": {} 00:18:16.696 } 00:18:16.696 ] 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.696 12:10:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.696 "name": "Existed_Raid", 00:18:16.696 "uuid": "d7234f0d-9fa5-4dcb-a833-d4e359944ce7", 00:18:16.696 "strip_size_kb": 0, 00:18:16.696 "state": "configuring", 00:18:16.696 "raid_level": "raid1", 
00:18:16.696 "superblock": true, 00:18:16.696 "num_base_bdevs": 2, 00:18:16.696 "num_base_bdevs_discovered": 1, 00:18:16.696 "num_base_bdevs_operational": 2, 00:18:16.696 "base_bdevs_list": [ 00:18:16.696 { 00:18:16.696 "name": "BaseBdev1", 00:18:16.696 "uuid": "92778848-86b7-4a40-ae15-1b83edccd9ed", 00:18:16.696 "is_configured": true, 00:18:16.696 "data_offset": 256, 00:18:16.696 "data_size": 7936 00:18:16.696 }, 00:18:16.696 { 00:18:16.696 "name": "BaseBdev2", 00:18:16.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.696 "is_configured": false, 00:18:16.696 "data_offset": 0, 00:18:16.696 "data_size": 0 00:18:16.696 } 00:18:16.696 ] 00:18:16.696 }' 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.696 12:10:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.267 [2024-11-19 12:10:20.387040] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:17.267 [2024-11-19 12:10:20.387077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.267 [2024-11-19 12:10:20.399108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:17.267 [2024-11-19 12:10:20.400940] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:17.267 [2024-11-19 12:10:20.400983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.267 
12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.267 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.267 "name": "Existed_Raid", 00:18:17.267 "uuid": "20943e74-8ab4-4641-bcca-36375ebcfde9", 00:18:17.267 "strip_size_kb": 0, 00:18:17.267 "state": "configuring", 00:18:17.267 "raid_level": "raid1", 00:18:17.267 "superblock": true, 00:18:17.267 "num_base_bdevs": 2, 00:18:17.267 "num_base_bdevs_discovered": 1, 00:18:17.268 "num_base_bdevs_operational": 2, 00:18:17.268 "base_bdevs_list": [ 00:18:17.268 { 00:18:17.268 "name": "BaseBdev1", 00:18:17.268 "uuid": "92778848-86b7-4a40-ae15-1b83edccd9ed", 00:18:17.268 "is_configured": true, 00:18:17.268 "data_offset": 256, 00:18:17.268 "data_size": 7936 00:18:17.268 }, 00:18:17.268 { 00:18:17.268 "name": "BaseBdev2", 00:18:17.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.268 "is_configured": false, 00:18:17.268 "data_offset": 0, 00:18:17.268 "data_size": 0 00:18:17.268 } 00:18:17.268 ] 00:18:17.268 }' 00:18:17.268 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:17.268 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.527 [2024-11-19 12:10:20.857842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:17.527 [2024-11-19 12:10:20.858135] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:17.527 [2024-11-19 12:10:20.858171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:17.527 [2024-11-19 12:10:20.858260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:17.527 [2024-11-19 12:10:20.858338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:17.527 [2024-11-19 12:10:20.858367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:17.527 [2024-11-19 12:10:20.858424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.527 BaseBdev2 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.527 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.527 [ 00:18:17.527 { 00:18:17.527 "name": "BaseBdev2", 00:18:17.527 "aliases": [ 00:18:17.527 "4dd15277-9efe-4924-915d-e337740385aa" 00:18:17.527 ], 00:18:17.527 "product_name": "Malloc disk", 00:18:17.527 "block_size": 4128, 00:18:17.527 "num_blocks": 8192, 00:18:17.527 "uuid": "4dd15277-9efe-4924-915d-e337740385aa", 00:18:17.527 "md_size": 32, 00:18:17.527 "md_interleave": true, 00:18:17.527 "dif_type": 0, 00:18:17.527 "assigned_rate_limits": { 00:18:17.527 "rw_ios_per_sec": 0, 00:18:17.527 "rw_mbytes_per_sec": 0, 00:18:17.527 "r_mbytes_per_sec": 0, 00:18:17.528 "w_mbytes_per_sec": 0 00:18:17.528 }, 00:18:17.528 "claimed": true, 00:18:17.528 "claim_type": "exclusive_write", 
00:18:17.528 "zoned": false, 00:18:17.528 "supported_io_types": { 00:18:17.528 "read": true, 00:18:17.528 "write": true, 00:18:17.528 "unmap": true, 00:18:17.528 "flush": true, 00:18:17.528 "reset": true, 00:18:17.528 "nvme_admin": false, 00:18:17.528 "nvme_io": false, 00:18:17.528 "nvme_io_md": false, 00:18:17.528 "write_zeroes": true, 00:18:17.528 "zcopy": true, 00:18:17.528 "get_zone_info": false, 00:18:17.528 "zone_management": false, 00:18:17.528 "zone_append": false, 00:18:17.528 "compare": false, 00:18:17.528 "compare_and_write": false, 00:18:17.528 "abort": true, 00:18:17.528 "seek_hole": false, 00:18:17.528 "seek_data": false, 00:18:17.528 "copy": true, 00:18:17.528 "nvme_iov_md": false 00:18:17.528 }, 00:18:17.528 "memory_domains": [ 00:18:17.528 { 00:18:17.528 "dma_device_id": "system", 00:18:17.528 "dma_device_type": 1 00:18:17.528 }, 00:18:17.528 { 00:18:17.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.528 "dma_device_type": 2 00:18:17.528 } 00:18:17.528 ], 00:18:17.528 "driver_specific": {} 00:18:17.528 } 00:18:17.528 ] 00:18:17.528 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.528 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:17.528 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:17.528 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:17.528 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:17.528 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.528 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.528 
12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.528 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.528 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.528 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.528 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.528 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.528 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.528 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.788 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.788 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.788 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.788 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.788 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.788 "name": "Existed_Raid", 00:18:17.788 "uuid": "20943e74-8ab4-4641-bcca-36375ebcfde9", 00:18:17.788 "strip_size_kb": 0, 00:18:17.788 "state": "online", 00:18:17.788 "raid_level": "raid1", 00:18:17.788 "superblock": true, 00:18:17.788 "num_base_bdevs": 2, 00:18:17.788 "num_base_bdevs_discovered": 2, 00:18:17.788 
"num_base_bdevs_operational": 2, 00:18:17.788 "base_bdevs_list": [ 00:18:17.788 { 00:18:17.788 "name": "BaseBdev1", 00:18:17.788 "uuid": "92778848-86b7-4a40-ae15-1b83edccd9ed", 00:18:17.788 "is_configured": true, 00:18:17.788 "data_offset": 256, 00:18:17.788 "data_size": 7936 00:18:17.788 }, 00:18:17.788 { 00:18:17.788 "name": "BaseBdev2", 00:18:17.788 "uuid": "4dd15277-9efe-4924-915d-e337740385aa", 00:18:17.788 "is_configured": true, 00:18:17.788 "data_offset": 256, 00:18:17.788 "data_size": 7936 00:18:17.788 } 00:18:17.788 ] 00:18:17.788 }' 00:18:17.788 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.788 12:10:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.048 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:18.048 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:18.048 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:18.048 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:18.048 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:18.048 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:18.048 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:18.048 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:18.048 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.048 12:10:21 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.049 [2024-11-19 12:10:21.393228] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:18.049 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.309 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:18.309 "name": "Existed_Raid", 00:18:18.309 "aliases": [ 00:18:18.309 "20943e74-8ab4-4641-bcca-36375ebcfde9" 00:18:18.309 ], 00:18:18.309 "product_name": "Raid Volume", 00:18:18.309 "block_size": 4128, 00:18:18.309 "num_blocks": 7936, 00:18:18.309 "uuid": "20943e74-8ab4-4641-bcca-36375ebcfde9", 00:18:18.309 "md_size": 32, 00:18:18.309 "md_interleave": true, 00:18:18.309 "dif_type": 0, 00:18:18.309 "assigned_rate_limits": { 00:18:18.309 "rw_ios_per_sec": 0, 00:18:18.309 "rw_mbytes_per_sec": 0, 00:18:18.309 "r_mbytes_per_sec": 0, 00:18:18.309 "w_mbytes_per_sec": 0 00:18:18.309 }, 00:18:18.309 "claimed": false, 00:18:18.309 "zoned": false, 00:18:18.309 "supported_io_types": { 00:18:18.309 "read": true, 00:18:18.309 "write": true, 00:18:18.309 "unmap": false, 00:18:18.309 "flush": false, 00:18:18.309 "reset": true, 00:18:18.309 "nvme_admin": false, 00:18:18.309 "nvme_io": false, 00:18:18.309 "nvme_io_md": false, 00:18:18.309 "write_zeroes": true, 00:18:18.309 "zcopy": false, 00:18:18.309 "get_zone_info": false, 00:18:18.309 "zone_management": false, 00:18:18.309 "zone_append": false, 00:18:18.309 "compare": false, 00:18:18.309 "compare_and_write": false, 00:18:18.309 "abort": false, 00:18:18.309 "seek_hole": false, 00:18:18.309 "seek_data": false, 00:18:18.309 "copy": false, 00:18:18.309 "nvme_iov_md": false 00:18:18.309 }, 00:18:18.309 "memory_domains": [ 00:18:18.309 { 00:18:18.309 "dma_device_id": "system", 00:18:18.309 "dma_device_type": 1 00:18:18.309 }, 00:18:18.309 { 00:18:18.309 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:18.309 "dma_device_type": 2 00:18:18.309 }, 00:18:18.309 { 00:18:18.309 "dma_device_id": "system", 00:18:18.309 "dma_device_type": 1 00:18:18.309 }, 00:18:18.309 { 00:18:18.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.309 "dma_device_type": 2 00:18:18.309 } 00:18:18.309 ], 00:18:18.309 "driver_specific": { 00:18:18.309 "raid": { 00:18:18.309 "uuid": "20943e74-8ab4-4641-bcca-36375ebcfde9", 00:18:18.309 "strip_size_kb": 0, 00:18:18.309 "state": "online", 00:18:18.309 "raid_level": "raid1", 00:18:18.309 "superblock": true, 00:18:18.309 "num_base_bdevs": 2, 00:18:18.309 "num_base_bdevs_discovered": 2, 00:18:18.309 "num_base_bdevs_operational": 2, 00:18:18.309 "base_bdevs_list": [ 00:18:18.309 { 00:18:18.309 "name": "BaseBdev1", 00:18:18.309 "uuid": "92778848-86b7-4a40-ae15-1b83edccd9ed", 00:18:18.309 "is_configured": true, 00:18:18.309 "data_offset": 256, 00:18:18.309 "data_size": 7936 00:18:18.309 }, 00:18:18.309 { 00:18:18.309 "name": "BaseBdev2", 00:18:18.309 "uuid": "4dd15277-9efe-4924-915d-e337740385aa", 00:18:18.309 "is_configured": true, 00:18:18.309 "data_offset": 256, 00:18:18.309 "data_size": 7936 00:18:18.309 } 00:18:18.309 ] 00:18:18.309 } 00:18:18.309 } 00:18:18.309 }' 00:18:18.309 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:18.309 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:18.309 BaseBdev2' 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:18.310 
12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.310 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.310 [2024-11-19 12:10:21.612624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:18.570 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.571 12:10:21 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.571 "name": "Existed_Raid", 00:18:18.571 "uuid": "20943e74-8ab4-4641-bcca-36375ebcfde9", 00:18:18.571 "strip_size_kb": 0, 00:18:18.571 "state": "online", 00:18:18.571 "raid_level": "raid1", 00:18:18.571 "superblock": true, 00:18:18.571 "num_base_bdevs": 2, 00:18:18.571 "num_base_bdevs_discovered": 1, 00:18:18.571 "num_base_bdevs_operational": 1, 00:18:18.571 "base_bdevs_list": [ 00:18:18.571 { 00:18:18.571 "name": null, 00:18:18.571 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:18.571 "is_configured": false, 00:18:18.571 "data_offset": 0, 00:18:18.571 "data_size": 7936 00:18:18.571 }, 00:18:18.571 { 00:18:18.571 "name": "BaseBdev2", 00:18:18.571 "uuid": "4dd15277-9efe-4924-915d-e337740385aa", 00:18:18.571 "is_configured": true, 00:18:18.571 "data_offset": 256, 00:18:18.571 "data_size": 7936 00:18:18.571 } 00:18:18.571 ] 00:18:18.571 }' 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.571 12:10:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.832 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:18.832 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:18.832 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:18.832 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.832 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.832 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.832 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.832 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:18.832 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:18.832 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:18.832 12:10:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.832 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.832 [2024-11-19 12:10:22.190233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:18.832 [2024-11-19 12:10:22.190340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.092 [2024-11-19 12:10:22.279618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.093 [2024-11-19 12:10:22.279670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.093 [2024-11-19 12:10:22.279681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88379 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88379 ']' 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88379 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88379 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:19.093 killing process with pid 88379 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88379' 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88379 00:18:19.093 [2024-11-19 12:10:22.378298] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:19.093 12:10:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88379 00:18:19.093 [2024-11-19 12:10:22.394236] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:20.475 
12:10:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:20.475 00:18:20.475 real 0m5.008s 00:18:20.475 user 0m7.292s 00:18:20.475 sys 0m0.863s 00:18:20.475 12:10:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.475 12:10:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.475 ************************************ 00:18:20.475 END TEST raid_state_function_test_sb_md_interleaved 00:18:20.475 ************************************ 00:18:20.475 12:10:23 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:20.475 12:10:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:20.475 12:10:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.475 12:10:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:20.475 ************************************ 00:18:20.475 START TEST raid_superblock_test_md_interleaved 00:18:20.475 ************************************ 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88627 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88627 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88627 ']' 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.475 12:10:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.475 [2024-11-19 12:10:23.595221] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:18:20.475 [2024-11-19 12:10:23.595344] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88627 ] 00:18:20.475 [2024-11-19 12:10:23.769431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.735 [2024-11-19 12:10:23.874477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.735 [2024-11-19 12:10:24.065695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.735 [2024-11-19 12:10:24.065732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.306 malloc1 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.306 [2024-11-19 12:10:24.467365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:21.306 [2024-11-19 12:10:24.467418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.306 [2024-11-19 12:10:24.467436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:21.306 [2024-11-19 12:10:24.467445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.306 
[2024-11-19 12:10:24.469252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.306 [2024-11-19 12:10:24.469288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:21.306 pt1 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.306 malloc2 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.306 [2024-11-19 12:10:24.526558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:21.306 [2024-11-19 12:10:24.526607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.306 [2024-11-19 12:10:24.526624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:21.306 [2024-11-19 12:10:24.526632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.306 [2024-11-19 12:10:24.528487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.306 [2024-11-19 12:10:24.528522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:21.306 pt2 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.306 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.306 [2024-11-19 12:10:24.538571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:21.306 [2024-11-19 12:10:24.540398] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:21.306 [2024-11-19 12:10:24.540590] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:21.306 [2024-11-19 12:10:24.540603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:21.306 [2024-11-19 12:10:24.540668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:21.307 [2024-11-19 12:10:24.540748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:21.307 [2024-11-19 12:10:24.540766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:21.307 [2024-11-19 12:10:24.540848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.307 
12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.307 "name": "raid_bdev1", 00:18:21.307 "uuid": "c9cd97ee-edbe-46fe-bf64-7dfa542bd116", 00:18:21.307 "strip_size_kb": 0, 00:18:21.307 "state": "online", 00:18:21.307 "raid_level": "raid1", 00:18:21.307 "superblock": true, 00:18:21.307 "num_base_bdevs": 2, 00:18:21.307 "num_base_bdevs_discovered": 2, 00:18:21.307 "num_base_bdevs_operational": 2, 00:18:21.307 "base_bdevs_list": [ 00:18:21.307 { 00:18:21.307 "name": "pt1", 00:18:21.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:21.307 "is_configured": true, 00:18:21.307 "data_offset": 256, 00:18:21.307 "data_size": 7936 00:18:21.307 }, 00:18:21.307 { 00:18:21.307 "name": "pt2", 00:18:21.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.307 "is_configured": true, 00:18:21.307 "data_offset": 256, 00:18:21.307 "data_size": 7936 00:18:21.307 } 00:18:21.307 ] 00:18:21.307 }' 00:18:21.307 12:10:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.307 12:10:24 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.878 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:21.878 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:21.878 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:21.878 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:21.878 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:21.878 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:21.878 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:21.878 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.878 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.878 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.878 [2024-11-19 12:10:25.037922] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.878 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.878 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:21.878 "name": "raid_bdev1", 00:18:21.878 "aliases": [ 00:18:21.878 "c9cd97ee-edbe-46fe-bf64-7dfa542bd116" 00:18:21.878 ], 00:18:21.878 "product_name": "Raid Volume", 00:18:21.878 "block_size": 4128, 00:18:21.878 "num_blocks": 7936, 00:18:21.878 "uuid": "c9cd97ee-edbe-46fe-bf64-7dfa542bd116", 00:18:21.878 "md_size": 32, 
00:18:21.878 "md_interleave": true, 00:18:21.878 "dif_type": 0, 00:18:21.878 "assigned_rate_limits": { 00:18:21.878 "rw_ios_per_sec": 0, 00:18:21.878 "rw_mbytes_per_sec": 0, 00:18:21.878 "r_mbytes_per_sec": 0, 00:18:21.878 "w_mbytes_per_sec": 0 00:18:21.878 }, 00:18:21.878 "claimed": false, 00:18:21.878 "zoned": false, 00:18:21.878 "supported_io_types": { 00:18:21.878 "read": true, 00:18:21.878 "write": true, 00:18:21.878 "unmap": false, 00:18:21.878 "flush": false, 00:18:21.878 "reset": true, 00:18:21.878 "nvme_admin": false, 00:18:21.878 "nvme_io": false, 00:18:21.878 "nvme_io_md": false, 00:18:21.878 "write_zeroes": true, 00:18:21.878 "zcopy": false, 00:18:21.878 "get_zone_info": false, 00:18:21.878 "zone_management": false, 00:18:21.878 "zone_append": false, 00:18:21.878 "compare": false, 00:18:21.878 "compare_and_write": false, 00:18:21.878 "abort": false, 00:18:21.878 "seek_hole": false, 00:18:21.878 "seek_data": false, 00:18:21.878 "copy": false, 00:18:21.878 "nvme_iov_md": false 00:18:21.878 }, 00:18:21.878 "memory_domains": [ 00:18:21.878 { 00:18:21.878 "dma_device_id": "system", 00:18:21.878 "dma_device_type": 1 00:18:21.878 }, 00:18:21.878 { 00:18:21.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.878 "dma_device_type": 2 00:18:21.878 }, 00:18:21.878 { 00:18:21.878 "dma_device_id": "system", 00:18:21.878 "dma_device_type": 1 00:18:21.878 }, 00:18:21.878 { 00:18:21.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.878 "dma_device_type": 2 00:18:21.878 } 00:18:21.878 ], 00:18:21.878 "driver_specific": { 00:18:21.878 "raid": { 00:18:21.878 "uuid": "c9cd97ee-edbe-46fe-bf64-7dfa542bd116", 00:18:21.878 "strip_size_kb": 0, 00:18:21.878 "state": "online", 00:18:21.878 "raid_level": "raid1", 00:18:21.878 "superblock": true, 00:18:21.878 "num_base_bdevs": 2, 00:18:21.878 "num_base_bdevs_discovered": 2, 00:18:21.878 "num_base_bdevs_operational": 2, 00:18:21.878 "base_bdevs_list": [ 00:18:21.878 { 00:18:21.878 "name": "pt1", 00:18:21.878 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:21.878 "is_configured": true, 00:18:21.878 "data_offset": 256, 00:18:21.878 "data_size": 7936 00:18:21.878 }, 00:18:21.878 { 00:18:21.878 "name": "pt2", 00:18:21.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.878 "is_configured": true, 00:18:21.878 "data_offset": 256, 00:18:21.878 "data_size": 7936 00:18:21.878 } 00:18:21.878 ] 00:18:21.878 } 00:18:21.878 } 00:18:21.878 }' 00:18:21.878 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:21.879 pt2' 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:21.879 12:10:25 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:21.879 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.139 [2024-11-19 12:10:25.257518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c9cd97ee-edbe-46fe-bf64-7dfa542bd116 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z c9cd97ee-edbe-46fe-bf64-7dfa542bd116 ']' 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.139 [2024-11-19 12:10:25.305188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:22.139 [2024-11-19 12:10:25.305211] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:22.139 [2024-11-19 12:10:25.305285] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.139 [2024-11-19 12:10:25.305336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.139 [2024-11-19 12:10:25.305347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.139 12:10:25 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.139 12:10:25 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:22.139 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.140 [2024-11-19 12:10:25.444979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:22.140 [2024-11-19 12:10:25.446747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:22.140 [2024-11-19 12:10:25.446820] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:22.140 [2024-11-19 12:10:25.446867] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:22.140 [2024-11-19 12:10:25.446880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:22.140 [2024-11-19 12:10:25.446890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:22.140 request: 00:18:22.140 { 00:18:22.140 "name": "raid_bdev1", 00:18:22.140 "raid_level": "raid1", 00:18:22.140 "base_bdevs": [ 00:18:22.140 "malloc1", 00:18:22.140 "malloc2" 00:18:22.140 ], 00:18:22.140 "superblock": false, 00:18:22.140 "method": "bdev_raid_create", 00:18:22.140 "req_id": 1 00:18:22.140 } 00:18:22.140 Got JSON-RPC error response 00:18:22.140 response: 00:18:22.140 { 00:18:22.140 "code": -17, 00:18:22.140 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:22.140 } 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:22.140 12:10:25 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.140 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.400 [2024-11-19 12:10:25.512835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:22.400 [2024-11-19 12:10:25.512881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.400 [2024-11-19 12:10:25.512895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:22.400 [2024-11-19 12:10:25.512905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.400 [2024-11-19 12:10:25.514681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.400 [2024-11-19 12:10:25.514717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:22.400 [2024-11-19 12:10:25.514760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:22.400 [2024-11-19 12:10:25.514821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:22.400 pt1 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.400 12:10:25 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.400 
"name": "raid_bdev1", 00:18:22.400 "uuid": "c9cd97ee-edbe-46fe-bf64-7dfa542bd116", 00:18:22.400 "strip_size_kb": 0, 00:18:22.400 "state": "configuring", 00:18:22.400 "raid_level": "raid1", 00:18:22.400 "superblock": true, 00:18:22.400 "num_base_bdevs": 2, 00:18:22.400 "num_base_bdevs_discovered": 1, 00:18:22.400 "num_base_bdevs_operational": 2, 00:18:22.400 "base_bdevs_list": [ 00:18:22.400 { 00:18:22.400 "name": "pt1", 00:18:22.400 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:22.400 "is_configured": true, 00:18:22.400 "data_offset": 256, 00:18:22.400 "data_size": 7936 00:18:22.400 }, 00:18:22.400 { 00:18:22.400 "name": null, 00:18:22.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.400 "is_configured": false, 00:18:22.400 "data_offset": 256, 00:18:22.400 "data_size": 7936 00:18:22.400 } 00:18:22.400 ] 00:18:22.400 }' 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.400 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.661 12:10:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.661 [2024-11-19 12:10:26.007962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:22.661 [2024-11-19 12:10:26.008024] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.661 [2024-11-19 12:10:26.008042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:22.661 [2024-11-19 12:10:26.008052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.661 [2024-11-19 12:10:26.008177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.661 [2024-11-19 12:10:26.008191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:22.661 [2024-11-19 12:10:26.008229] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:22.661 [2024-11-19 12:10:26.008250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:22.661 [2024-11-19 12:10:26.008325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:22.661 [2024-11-19 12:10:26.008345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:22.661 [2024-11-19 12:10:26.008412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:22.661 [2024-11-19 12:10:26.008485] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:22.661 [2024-11-19 12:10:26.008494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:22.661 [2024-11-19 12:10:26.008549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.661 pt2 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:22.661 12:10:26 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.661 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.662 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.662 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.662 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.662 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.662 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.662 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.662 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.922 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.922 "name": 
"raid_bdev1", 00:18:22.922 "uuid": "c9cd97ee-edbe-46fe-bf64-7dfa542bd116", 00:18:22.922 "strip_size_kb": 0, 00:18:22.922 "state": "online", 00:18:22.922 "raid_level": "raid1", 00:18:22.922 "superblock": true, 00:18:22.922 "num_base_bdevs": 2, 00:18:22.922 "num_base_bdevs_discovered": 2, 00:18:22.922 "num_base_bdevs_operational": 2, 00:18:22.922 "base_bdevs_list": [ 00:18:22.922 { 00:18:22.922 "name": "pt1", 00:18:22.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:22.922 "is_configured": true, 00:18:22.922 "data_offset": 256, 00:18:22.922 "data_size": 7936 00:18:22.922 }, 00:18:22.922 { 00:18:22.922 "name": "pt2", 00:18:22.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.922 "is_configured": true, 00:18:22.922 "data_offset": 256, 00:18:22.922 "data_size": 7936 00:18:22.922 } 00:18:22.922 ] 00:18:22.922 }' 00:18:22.922 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.922 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.182 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:23.182 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:23.182 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:23.182 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:23.182 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:23.182 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:23.182 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:23.182 12:10:26 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.182 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.182 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:23.182 [2024-11-19 12:10:26.475410] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.182 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.182 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:23.182 "name": "raid_bdev1", 00:18:23.182 "aliases": [ 00:18:23.182 "c9cd97ee-edbe-46fe-bf64-7dfa542bd116" 00:18:23.182 ], 00:18:23.182 "product_name": "Raid Volume", 00:18:23.182 "block_size": 4128, 00:18:23.182 "num_blocks": 7936, 00:18:23.182 "uuid": "c9cd97ee-edbe-46fe-bf64-7dfa542bd116", 00:18:23.182 "md_size": 32, 00:18:23.182 "md_interleave": true, 00:18:23.182 "dif_type": 0, 00:18:23.182 "assigned_rate_limits": { 00:18:23.182 "rw_ios_per_sec": 0, 00:18:23.182 "rw_mbytes_per_sec": 0, 00:18:23.182 "r_mbytes_per_sec": 0, 00:18:23.182 "w_mbytes_per_sec": 0 00:18:23.182 }, 00:18:23.182 "claimed": false, 00:18:23.182 "zoned": false, 00:18:23.182 "supported_io_types": { 00:18:23.182 "read": true, 00:18:23.182 "write": true, 00:18:23.182 "unmap": false, 00:18:23.182 "flush": false, 00:18:23.182 "reset": true, 00:18:23.182 "nvme_admin": false, 00:18:23.182 "nvme_io": false, 00:18:23.182 "nvme_io_md": false, 00:18:23.182 "write_zeroes": true, 00:18:23.182 "zcopy": false, 00:18:23.182 "get_zone_info": false, 00:18:23.182 "zone_management": false, 00:18:23.182 "zone_append": false, 00:18:23.182 "compare": false, 00:18:23.182 "compare_and_write": false, 00:18:23.182 "abort": false, 00:18:23.182 "seek_hole": false, 00:18:23.182 "seek_data": false, 00:18:23.182 "copy": false, 00:18:23.182 "nvme_iov_md": 
false 00:18:23.182 }, 00:18:23.182 "memory_domains": [ 00:18:23.182 { 00:18:23.182 "dma_device_id": "system", 00:18:23.182 "dma_device_type": 1 00:18:23.182 }, 00:18:23.182 { 00:18:23.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.182 "dma_device_type": 2 00:18:23.182 }, 00:18:23.182 { 00:18:23.182 "dma_device_id": "system", 00:18:23.182 "dma_device_type": 1 00:18:23.182 }, 00:18:23.182 { 00:18:23.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.182 "dma_device_type": 2 00:18:23.182 } 00:18:23.182 ], 00:18:23.182 "driver_specific": { 00:18:23.182 "raid": { 00:18:23.182 "uuid": "c9cd97ee-edbe-46fe-bf64-7dfa542bd116", 00:18:23.182 "strip_size_kb": 0, 00:18:23.182 "state": "online", 00:18:23.182 "raid_level": "raid1", 00:18:23.182 "superblock": true, 00:18:23.182 "num_base_bdevs": 2, 00:18:23.182 "num_base_bdevs_discovered": 2, 00:18:23.182 "num_base_bdevs_operational": 2, 00:18:23.182 "base_bdevs_list": [ 00:18:23.182 { 00:18:23.182 "name": "pt1", 00:18:23.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:23.182 "is_configured": true, 00:18:23.182 "data_offset": 256, 00:18:23.182 "data_size": 7936 00:18:23.182 }, 00:18:23.182 { 00:18:23.182 "name": "pt2", 00:18:23.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.182 "is_configured": true, 00:18:23.182 "data_offset": 256, 00:18:23.182 "data_size": 7936 00:18:23.182 } 00:18:23.182 ] 00:18:23.182 } 00:18:23.182 } 00:18:23.182 }' 00:18:23.182 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:23.442 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:23.443 pt2' 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.443 [2024-11-19 12:10:26.699048] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' c9cd97ee-edbe-46fe-bf64-7dfa542bd116 '!=' c9cd97ee-edbe-46fe-bf64-7dfa542bd116 ']' 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.443 [2024-11-19 12:10:26.750738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:23.443 "name": "raid_bdev1", 00:18:23.443 "uuid": "c9cd97ee-edbe-46fe-bf64-7dfa542bd116", 00:18:23.443 "strip_size_kb": 0, 00:18:23.443 "state": "online", 00:18:23.443 "raid_level": "raid1", 00:18:23.443 "superblock": true, 00:18:23.443 "num_base_bdevs": 2, 00:18:23.443 "num_base_bdevs_discovered": 1, 00:18:23.443 "num_base_bdevs_operational": 1, 00:18:23.443 "base_bdevs_list": [ 00:18:23.443 { 00:18:23.443 "name": null, 00:18:23.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.443 "is_configured": false, 00:18:23.443 "data_offset": 0, 00:18:23.443 "data_size": 7936 00:18:23.443 }, 00:18:23.443 { 00:18:23.443 "name": "pt2", 00:18:23.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.443 "is_configured": true, 00:18:23.443 "data_offset": 256, 00:18:23.443 "data_size": 7936 00:18:23.443 } 00:18:23.443 ] 00:18:23.443 }' 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.443 12:10:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.014 [2024-11-19 12:10:27.193913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.014 [2024-11-19 12:10:27.193937] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.014 [2024-11-19 12:10:27.193991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.014 [2024-11-19 12:10:27.194046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:24.014 [2024-11-19 12:10:27.194056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.014 [2024-11-19 12:10:27.269799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:24.014 [2024-11-19 12:10:27.269847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.014 [2024-11-19 12:10:27.269861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:24.014 [2024-11-19 12:10:27.269871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.014 [2024-11-19 12:10:27.271778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.014 [2024-11-19 12:10:27.271816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:24.014 [2024-11-19 12:10:27.271861] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:24.014 [2024-11-19 12:10:27.271912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:24.014 [2024-11-19 12:10:27.271968] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:24.014 [2024-11-19 12:10:27.271981] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:24.014 [2024-11-19 12:10:27.272072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:24.014 [2024-11-19 12:10:27.272133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:24.014 [2024-11-19 12:10:27.272147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:24.014 [2024-11-19 12:10:27.272203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.014 pt2 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.014 12:10:27 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.014 "name": "raid_bdev1", 00:18:24.014 "uuid": "c9cd97ee-edbe-46fe-bf64-7dfa542bd116", 00:18:24.014 "strip_size_kb": 0, 00:18:24.014 "state": "online", 00:18:24.014 "raid_level": "raid1", 00:18:24.014 "superblock": true, 00:18:24.014 "num_base_bdevs": 2, 00:18:24.014 "num_base_bdevs_discovered": 1, 00:18:24.014 "num_base_bdevs_operational": 1, 00:18:24.014 "base_bdevs_list": [ 00:18:24.014 { 00:18:24.014 "name": null, 00:18:24.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.014 "is_configured": false, 00:18:24.014 "data_offset": 256, 00:18:24.014 "data_size": 7936 00:18:24.014 }, 00:18:24.014 { 00:18:24.014 "name": "pt2", 00:18:24.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.014 "is_configured": true, 00:18:24.014 "data_offset": 256, 00:18:24.014 "data_size": 7936 00:18:24.014 } 00:18:24.014 ] 00:18:24.014 }' 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.014 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.313 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:24.313 12:10:27 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.313 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.574 [2024-11-19 12:10:27.693081] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.574 [2024-11-19 12:10:27.693111] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.574 [2024-11-19 12:10:27.693178] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.574 [2024-11-19 12:10:27.693222] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.574 [2024-11-19 12:10:27.693231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.574 [2024-11-19 12:10:27.756973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:24.574 [2024-11-19 12:10:27.757042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.574 [2024-11-19 12:10:27.757062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:24.574 [2024-11-19 12:10:27.757070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.574 [2024-11-19 12:10:27.758923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.574 [2024-11-19 12:10:27.758957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:24.574 [2024-11-19 12:10:27.759017] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:24.574 [2024-11-19 12:10:27.759061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:24.574 [2024-11-19 12:10:27.759174] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:24.574 [2024-11-19 12:10:27.759184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.574 [2024-11-19 12:10:27.759199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:24.574 [2024-11-19 12:10:27.759252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:24.574 [2024-11-19 12:10:27.759331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:24.574 [2024-11-19 12:10:27.759342] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:24.574 [2024-11-19 12:10:27.759402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:24.574 [2024-11-19 12:10:27.759461] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:24.574 [2024-11-19 12:10:27.759488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:24.574 [2024-11-19 12:10:27.759554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.574 pt1 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.574 12:10:27 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.574 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.575 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.575 "name": "raid_bdev1", 00:18:24.575 "uuid": "c9cd97ee-edbe-46fe-bf64-7dfa542bd116", 00:18:24.575 "strip_size_kb": 0, 00:18:24.575 "state": "online", 00:18:24.575 "raid_level": "raid1", 00:18:24.575 "superblock": true, 00:18:24.575 "num_base_bdevs": 2, 00:18:24.575 "num_base_bdevs_discovered": 1, 00:18:24.575 "num_base_bdevs_operational": 1, 00:18:24.575 "base_bdevs_list": [ 00:18:24.575 { 00:18:24.575 "name": null, 00:18:24.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.575 "is_configured": false, 00:18:24.575 "data_offset": 256, 00:18:24.575 "data_size": 7936 00:18:24.575 }, 00:18:24.575 { 00:18:24.575 "name": "pt2", 00:18:24.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.575 "is_configured": true, 00:18:24.575 "data_offset": 256, 00:18:24.575 "data_size": 7936 00:18:24.575 } 00:18:24.575 ] 00:18:24.575 }' 00:18:24.575 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.575 12:10:27 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:24.835 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:24.835 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:24.835 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.835 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.096 [2024-11-19 12:10:28.260290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' c9cd97ee-edbe-46fe-bf64-7dfa542bd116 '!=' c9cd97ee-edbe-46fe-bf64-7dfa542bd116 ']' 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88627 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88627 ']' 00:18:25.096 12:10:28 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88627 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88627 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.096 killing process with pid 88627 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88627' 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88627 00:18:25.096 [2024-11-19 12:10:28.328142] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:25.096 [2024-11-19 12:10:28.328209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.096 [2024-11-19 12:10:28.328256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:25.096 [2024-11-19 12:10:28.328272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:25.096 12:10:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88627 00:18:25.357 [2024-11-19 12:10:28.524295] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.298 12:10:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:26.298 00:18:26.298 real 0m6.059s 00:18:26.298 user 0m9.227s 00:18:26.298 sys 0m1.146s 00:18:26.298 
12:10:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.298 ************************************ 00:18:26.298 END TEST raid_superblock_test_md_interleaved 00:18:26.298 ************************************ 00:18:26.298 12:10:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.298 12:10:29 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:26.298 12:10:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:26.298 12:10:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.298 12:10:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.298 ************************************ 00:18:26.298 START TEST raid_rebuild_test_sb_md_interleaved 00:18:26.298 ************************************ 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=88950 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88950 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88950 ']' 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.298 12:10:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.558 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:26.558 Zero copy mechanism will not be used. 00:18:26.558 [2024-11-19 12:10:29.733775] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:26.559 [2024-11-19 12:10:29.733888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88950 ] 00:18:26.559 [2024-11-19 12:10:29.907293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.823 [2024-11-19 12:10:30.011022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.082 [2024-11-19 12:10:30.210602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.082 [2024-11-19 12:10:30.210661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.343 BaseBdev1_malloc 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.343 12:10:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.343 [2024-11-19 12:10:30.591355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:27.343 [2024-11-19 12:10:30.591412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.343 [2024-11-19 12:10:30.591433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:27.343 [2024-11-19 12:10:30.591445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.343 [2024-11-19 12:10:30.593253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.343 [2024-11-19 12:10:30.593291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:27.343 BaseBdev1 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.343 BaseBdev2_malloc 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:27.343 [2024-11-19 12:10:30.640395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:27.343 [2024-11-19 12:10:30.640457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.343 [2024-11-19 12:10:30.640488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:27.343 [2024-11-19 12:10:30.640500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.343 [2024-11-19 12:10:30.642233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.343 [2024-11-19 12:10:30.642267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:27.343 BaseBdev2 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.343 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.604 spare_malloc 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.604 spare_delay 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.604 [2024-11-19 12:10:30.738091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:27.604 [2024-11-19 12:10:30.738143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.604 [2024-11-19 12:10:30.738161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:27.604 [2024-11-19 12:10:30.738172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.604 [2024-11-19 12:10:30.739938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.604 [2024-11-19 12:10:30.739978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:27.604 spare 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.604 [2024-11-19 12:10:30.750115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.604 [2024-11-19 12:10:30.751821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:27.604 [2024-11-19 
12:10:30.752019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:27.604 [2024-11-19 12:10:30.752034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:27.604 [2024-11-19 12:10:30.752108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:27.604 [2024-11-19 12:10:30.752177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:27.604 [2024-11-19 12:10:30.752192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:27.604 [2024-11-19 12:10:30.752257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.604 "name": "raid_bdev1", 00:18:27.604 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:27.604 "strip_size_kb": 0, 00:18:27.604 "state": "online", 00:18:27.604 "raid_level": "raid1", 00:18:27.604 "superblock": true, 00:18:27.604 "num_base_bdevs": 2, 00:18:27.604 "num_base_bdevs_discovered": 2, 00:18:27.604 "num_base_bdevs_operational": 2, 00:18:27.604 "base_bdevs_list": [ 00:18:27.604 { 00:18:27.604 "name": "BaseBdev1", 00:18:27.604 "uuid": "de7c17e8-ed5f-5fdc-b80c-519f00d8d0e3", 00:18:27.604 "is_configured": true, 00:18:27.604 "data_offset": 256, 00:18:27.604 "data_size": 7936 00:18:27.604 }, 00:18:27.604 { 00:18:27.604 "name": "BaseBdev2", 00:18:27.604 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:27.604 "is_configured": true, 00:18:27.604 "data_offset": 256, 00:18:27.604 "data_size": 7936 00:18:27.604 } 00:18:27.604 ] 00:18:27.604 }' 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.604 12:10:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.865 12:10:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:27.865 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:27.865 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.865 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.865 [2024-11-19 12:10:31.197569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.865 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.865 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:28.125 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:28.126 12:10:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.126 [2024-11-19 12:10:31.289132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.126 12:10:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.126 "name": "raid_bdev1", 00:18:28.126 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:28.126 "strip_size_kb": 0, 00:18:28.126 "state": "online", 00:18:28.126 "raid_level": "raid1", 00:18:28.126 "superblock": true, 00:18:28.126 "num_base_bdevs": 2, 00:18:28.126 "num_base_bdevs_discovered": 1, 00:18:28.126 "num_base_bdevs_operational": 1, 00:18:28.126 "base_bdevs_list": [ 00:18:28.126 { 00:18:28.126 "name": null, 00:18:28.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.126 "is_configured": false, 00:18:28.126 "data_offset": 0, 00:18:28.126 "data_size": 7936 00:18:28.126 }, 00:18:28.126 { 00:18:28.126 "name": "BaseBdev2", 00:18:28.126 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:28.126 "is_configured": true, 00:18:28.126 "data_offset": 256, 00:18:28.126 "data_size": 7936 00:18:28.126 } 00:18:28.126 ] 00:18:28.126 }' 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.126 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.387 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:28.387 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.387 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.387 [2024-11-19 12:10:31.704461] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.387 [2024-11-19 12:10:31.721797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:28.387 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.387 12:10:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:28.387 [2024-11-19 12:10:31.723638] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.772 "name": "raid_bdev1", 00:18:29.772 
"uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:29.772 "strip_size_kb": 0, 00:18:29.772 "state": "online", 00:18:29.772 "raid_level": "raid1", 00:18:29.772 "superblock": true, 00:18:29.772 "num_base_bdevs": 2, 00:18:29.772 "num_base_bdevs_discovered": 2, 00:18:29.772 "num_base_bdevs_operational": 2, 00:18:29.772 "process": { 00:18:29.772 "type": "rebuild", 00:18:29.772 "target": "spare", 00:18:29.772 "progress": { 00:18:29.772 "blocks": 2560, 00:18:29.772 "percent": 32 00:18:29.772 } 00:18:29.772 }, 00:18:29.772 "base_bdevs_list": [ 00:18:29.772 { 00:18:29.772 "name": "spare", 00:18:29.772 "uuid": "1aaeb342-8c71-541d-bbcd-ddeba6cc615d", 00:18:29.772 "is_configured": true, 00:18:29.772 "data_offset": 256, 00:18:29.772 "data_size": 7936 00:18:29.772 }, 00:18:29.772 { 00:18:29.772 "name": "BaseBdev2", 00:18:29.772 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:29.772 "is_configured": true, 00:18:29.772 "data_offset": 256, 00:18:29.772 "data_size": 7936 00:18:29.772 } 00:18:29.772 ] 00:18:29.772 }' 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.772 [2024-11-19 12:10:32.883396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:29.772 [2024-11-19 12:10:32.928385] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:29.772 [2024-11-19 12:10:32.928440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.772 [2024-11-19 12:10:32.928454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:29.772 [2024-11-19 12:10:32.928466] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.772 12:10:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.772 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.772 "name": "raid_bdev1", 00:18:29.772 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:29.772 "strip_size_kb": 0, 00:18:29.772 "state": "online", 00:18:29.772 "raid_level": "raid1", 00:18:29.772 "superblock": true, 00:18:29.772 "num_base_bdevs": 2, 00:18:29.772 "num_base_bdevs_discovered": 1, 00:18:29.772 "num_base_bdevs_operational": 1, 00:18:29.772 "base_bdevs_list": [ 00:18:29.772 { 00:18:29.772 "name": null, 00:18:29.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.773 "is_configured": false, 00:18:29.773 "data_offset": 0, 00:18:29.773 "data_size": 7936 00:18:29.773 }, 00:18:29.773 { 00:18:29.773 "name": "BaseBdev2", 00:18:29.773 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:29.773 "is_configured": true, 00:18:29.773 "data_offset": 256, 00:18:29.773 "data_size": 7936 00:18:29.773 } 00:18:29.773 ] 00:18:29.773 }' 00:18:29.773 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.773 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.033 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:30.033 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:30.033 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:30.033 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:30.033 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.033 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.033 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.033 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.033 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.033 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.033 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.033 "name": "raid_bdev1", 00:18:30.033 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:30.033 "strip_size_kb": 0, 00:18:30.033 "state": "online", 00:18:30.033 "raid_level": "raid1", 00:18:30.033 "superblock": true, 00:18:30.033 "num_base_bdevs": 2, 00:18:30.033 "num_base_bdevs_discovered": 1, 00:18:30.033 "num_base_bdevs_operational": 1, 00:18:30.033 "base_bdevs_list": [ 00:18:30.033 { 00:18:30.033 "name": null, 00:18:30.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.033 "is_configured": false, 00:18:30.033 "data_offset": 0, 00:18:30.033 "data_size": 7936 00:18:30.033 }, 00:18:30.033 { 00:18:30.033 "name": "BaseBdev2", 00:18:30.033 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:30.033 "is_configured": true, 00:18:30.033 "data_offset": 256, 00:18:30.033 "data_size": 7936 00:18:30.033 } 00:18:30.033 ] 00:18:30.033 }' 
00:18:30.033 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.294 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:30.294 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.294 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:30.294 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:30.294 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.294 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.294 [2024-11-19 12:10:33.461424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:30.294 [2024-11-19 12:10:33.477290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:30.294 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.294 12:10:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:30.294 [2024-11-19 12:10:33.479127] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:31.235 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.235 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.236 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.236 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:31.236 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.236 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.236 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.236 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.236 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.236 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.236 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.236 "name": "raid_bdev1", 00:18:31.236 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:31.236 "strip_size_kb": 0, 00:18:31.236 "state": "online", 00:18:31.236 "raid_level": "raid1", 00:18:31.236 "superblock": true, 00:18:31.236 "num_base_bdevs": 2, 00:18:31.236 "num_base_bdevs_discovered": 2, 00:18:31.236 "num_base_bdevs_operational": 2, 00:18:31.236 "process": { 00:18:31.236 "type": "rebuild", 00:18:31.236 "target": "spare", 00:18:31.236 "progress": { 00:18:31.236 "blocks": 2560, 00:18:31.236 "percent": 32 00:18:31.236 } 00:18:31.236 }, 00:18:31.236 "base_bdevs_list": [ 00:18:31.236 { 00:18:31.236 "name": "spare", 00:18:31.236 "uuid": "1aaeb342-8c71-541d-bbcd-ddeba6cc615d", 00:18:31.236 "is_configured": true, 00:18:31.236 "data_offset": 256, 00:18:31.236 "data_size": 7936 00:18:31.236 }, 00:18:31.236 { 00:18:31.236 "name": "BaseBdev2", 00:18:31.236 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:31.236 "is_configured": true, 00:18:31.236 "data_offset": 256, 00:18:31.236 "data_size": 7936 00:18:31.236 } 00:18:31.236 ] 00:18:31.236 }' 00:18:31.236 12:10:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.236 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.236 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:31.497 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=727 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.497 12:10:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.497 "name": "raid_bdev1", 00:18:31.497 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:31.497 "strip_size_kb": 0, 00:18:31.497 "state": "online", 00:18:31.497 "raid_level": "raid1", 00:18:31.497 "superblock": true, 00:18:31.497 "num_base_bdevs": 2, 00:18:31.497 "num_base_bdevs_discovered": 2, 00:18:31.497 "num_base_bdevs_operational": 2, 00:18:31.497 "process": { 00:18:31.497 "type": "rebuild", 00:18:31.497 "target": "spare", 00:18:31.497 "progress": { 00:18:31.497 "blocks": 2816, 00:18:31.497 "percent": 35 00:18:31.497 } 00:18:31.497 }, 00:18:31.497 "base_bdevs_list": [ 00:18:31.497 { 00:18:31.497 "name": "spare", 00:18:31.497 "uuid": "1aaeb342-8c71-541d-bbcd-ddeba6cc615d", 00:18:31.497 "is_configured": true, 00:18:31.497 "data_offset": 256, 00:18:31.497 "data_size": 7936 00:18:31.497 }, 00:18:31.497 { 00:18:31.497 "name": "BaseBdev2", 00:18:31.497 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:31.497 "is_configured": true, 00:18:31.497 "data_offset": 256, 00:18:31.497 "data_size": 7936 00:18:31.497 } 00:18:31.497 ] 00:18:31.497 }' 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.497 12:10:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:32.438 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:32.438 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.438 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.438 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.438 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.438 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.438 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.438 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.438 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.438 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.438 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.438 12:10:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.438 "name": "raid_bdev1", 00:18:32.438 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:32.438 "strip_size_kb": 0, 00:18:32.438 "state": "online", 00:18:32.438 "raid_level": "raid1", 00:18:32.438 "superblock": true, 00:18:32.438 "num_base_bdevs": 2, 00:18:32.438 "num_base_bdevs_discovered": 2, 00:18:32.438 "num_base_bdevs_operational": 2, 00:18:32.438 "process": { 00:18:32.438 "type": "rebuild", 00:18:32.438 "target": "spare", 00:18:32.438 "progress": { 00:18:32.438 "blocks": 5632, 00:18:32.438 "percent": 70 00:18:32.438 } 00:18:32.438 }, 00:18:32.438 "base_bdevs_list": [ 00:18:32.438 { 00:18:32.438 "name": "spare", 00:18:32.438 "uuid": "1aaeb342-8c71-541d-bbcd-ddeba6cc615d", 00:18:32.438 "is_configured": true, 00:18:32.438 "data_offset": 256, 00:18:32.438 "data_size": 7936 00:18:32.438 }, 00:18:32.438 { 00:18:32.438 "name": "BaseBdev2", 00:18:32.438 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:32.438 "is_configured": true, 00:18:32.438 "data_offset": 256, 00:18:32.438 "data_size": 7936 00:18:32.438 } 00:18:32.438 ] 00:18:32.438 }' 00:18:32.698 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.698 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.698 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.698 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.698 12:10:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:33.269 [2024-11-19 12:10:36.590960] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:33.269 [2024-11-19 12:10:36.591102] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:33.269 [2024-11-19 12:10:36.591242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.840 12:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:33.840 12:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.840 12:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.840 12:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.840 12:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.840 12:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.840 12:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.840 12:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.840 12:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.840 12:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.840 12:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.840 12:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.840 "name": "raid_bdev1", 00:18:33.840 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:33.840 "strip_size_kb": 0, 00:18:33.840 "state": "online", 00:18:33.840 "raid_level": "raid1", 00:18:33.840 "superblock": true, 00:18:33.840 "num_base_bdevs": 2, 00:18:33.840 
"num_base_bdevs_discovered": 2, 00:18:33.840 "num_base_bdevs_operational": 2, 00:18:33.840 "base_bdevs_list": [ 00:18:33.840 { 00:18:33.840 "name": "spare", 00:18:33.840 "uuid": "1aaeb342-8c71-541d-bbcd-ddeba6cc615d", 00:18:33.840 "is_configured": true, 00:18:33.840 "data_offset": 256, 00:18:33.840 "data_size": 7936 00:18:33.840 }, 00:18:33.840 { 00:18:33.840 "name": "BaseBdev2", 00:18:33.840 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:33.840 "is_configured": true, 00:18:33.840 "data_offset": 256, 00:18:33.840 "data_size": 7936 00:18:33.840 } 00:18:33.840 ] 00:18:33.840 }' 00:18:33.840 12:10:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.840 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:33.840 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.840 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:33.840 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.841 
12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.841 "name": "raid_bdev1", 00:18:33.841 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:33.841 "strip_size_kb": 0, 00:18:33.841 "state": "online", 00:18:33.841 "raid_level": "raid1", 00:18:33.841 "superblock": true, 00:18:33.841 "num_base_bdevs": 2, 00:18:33.841 "num_base_bdevs_discovered": 2, 00:18:33.841 "num_base_bdevs_operational": 2, 00:18:33.841 "base_bdevs_list": [ 00:18:33.841 { 00:18:33.841 "name": "spare", 00:18:33.841 "uuid": "1aaeb342-8c71-541d-bbcd-ddeba6cc615d", 00:18:33.841 "is_configured": true, 00:18:33.841 "data_offset": 256, 00:18:33.841 "data_size": 7936 00:18:33.841 }, 00:18:33.841 { 00:18:33.841 "name": "BaseBdev2", 00:18:33.841 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:33.841 "is_configured": true, 00:18:33.841 "data_offset": 256, 00:18:33.841 "data_size": 7936 00:18:33.841 } 00:18:33.841 ] 00:18:33.841 }' 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.841 12:10:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.841 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.101 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.101 "name": 
"raid_bdev1", 00:18:34.101 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:34.101 "strip_size_kb": 0, 00:18:34.101 "state": "online", 00:18:34.101 "raid_level": "raid1", 00:18:34.101 "superblock": true, 00:18:34.101 "num_base_bdevs": 2, 00:18:34.101 "num_base_bdevs_discovered": 2, 00:18:34.101 "num_base_bdevs_operational": 2, 00:18:34.101 "base_bdevs_list": [ 00:18:34.101 { 00:18:34.101 "name": "spare", 00:18:34.101 "uuid": "1aaeb342-8c71-541d-bbcd-ddeba6cc615d", 00:18:34.101 "is_configured": true, 00:18:34.101 "data_offset": 256, 00:18:34.101 "data_size": 7936 00:18:34.101 }, 00:18:34.101 { 00:18:34.101 "name": "BaseBdev2", 00:18:34.101 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:34.101 "is_configured": true, 00:18:34.101 "data_offset": 256, 00:18:34.101 "data_size": 7936 00:18:34.101 } 00:18:34.101 ] 00:18:34.101 }' 00:18:34.101 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.101 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.361 [2024-11-19 12:10:37.614188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.361 [2024-11-19 12:10:37.614221] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.361 [2024-11-19 12:10:37.614305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.361 [2024-11-19 12:10:37.614383] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.361 [2024-11-19 
12:10:37.614399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:34.361 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.361 12:10:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.361 [2024-11-19 12:10:37.686076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:34.361 [2024-11-19 12:10:37.686127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.361 [2024-11-19 12:10:37.686146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:34.361 [2024-11-19 12:10:37.686154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.361 [2024-11-19 12:10:37.688088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.361 [2024-11-19 12:10:37.688125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:34.362 [2024-11-19 12:10:37.688177] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:34.362 [2024-11-19 12:10:37.688238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.362 [2024-11-19 12:10:37.688340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.362 spare 00:18:34.362 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.362 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:34.362 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.362 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.622 [2024-11-19 12:10:37.788232] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:34.622 [2024-11-19 12:10:37.788264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:34.622 [2024-11-19 12:10:37.788362] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:34.622 [2024-11-19 12:10:37.788434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:34.622 [2024-11-19 12:10:37.788442] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:34.622 [2024-11-19 12:10:37.788526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.622 12:10:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.622 "name": "raid_bdev1", 00:18:34.622 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:34.622 "strip_size_kb": 0, 00:18:34.622 "state": "online", 00:18:34.622 "raid_level": "raid1", 00:18:34.622 "superblock": true, 00:18:34.622 "num_base_bdevs": 2, 00:18:34.622 "num_base_bdevs_discovered": 2, 00:18:34.622 "num_base_bdevs_operational": 2, 00:18:34.622 "base_bdevs_list": [ 00:18:34.622 { 00:18:34.622 "name": "spare", 00:18:34.622 "uuid": "1aaeb342-8c71-541d-bbcd-ddeba6cc615d", 00:18:34.622 "is_configured": true, 00:18:34.622 "data_offset": 256, 00:18:34.622 "data_size": 7936 00:18:34.622 }, 00:18:34.622 { 00:18:34.622 "name": "BaseBdev2", 00:18:34.622 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:34.622 "is_configured": true, 00:18:34.622 "data_offset": 256, 00:18:34.622 "data_size": 7936 00:18:34.622 } 00:18:34.622 ] 00:18:34.622 }' 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.622 12:10:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.883 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.883 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.883 12:10:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.883 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.883 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.883 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.883 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.883 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.883 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.883 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.143 "name": "raid_bdev1", 00:18:35.143 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:35.143 "strip_size_kb": 0, 00:18:35.143 "state": "online", 00:18:35.143 "raid_level": "raid1", 00:18:35.143 "superblock": true, 00:18:35.143 "num_base_bdevs": 2, 00:18:35.143 "num_base_bdevs_discovered": 2, 00:18:35.143 "num_base_bdevs_operational": 2, 00:18:35.143 "base_bdevs_list": [ 00:18:35.143 { 00:18:35.143 "name": "spare", 00:18:35.143 "uuid": "1aaeb342-8c71-541d-bbcd-ddeba6cc615d", 00:18:35.143 "is_configured": true, 00:18:35.143 "data_offset": 256, 00:18:35.143 "data_size": 7936 00:18:35.143 }, 00:18:35.143 { 00:18:35.143 "name": "BaseBdev2", 00:18:35.143 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:35.143 "is_configured": true, 00:18:35.143 "data_offset": 256, 00:18:35.143 "data_size": 7936 00:18:35.143 } 00:18:35.143 ] 00:18:35.143 }' 00:18:35.143 12:10:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.143 [2024-11-19 12:10:38.416889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.143 12:10:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.143 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.144 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.144 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.144 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.144 "name": "raid_bdev1", 00:18:35.144 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:35.144 "strip_size_kb": 0, 00:18:35.144 "state": "online", 00:18:35.144 
"raid_level": "raid1", 00:18:35.144 "superblock": true, 00:18:35.144 "num_base_bdevs": 2, 00:18:35.144 "num_base_bdevs_discovered": 1, 00:18:35.144 "num_base_bdevs_operational": 1, 00:18:35.144 "base_bdevs_list": [ 00:18:35.144 { 00:18:35.144 "name": null, 00:18:35.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.144 "is_configured": false, 00:18:35.144 "data_offset": 0, 00:18:35.144 "data_size": 7936 00:18:35.144 }, 00:18:35.144 { 00:18:35.144 "name": "BaseBdev2", 00:18:35.144 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:35.144 "is_configured": true, 00:18:35.144 "data_offset": 256, 00:18:35.144 "data_size": 7936 00:18:35.144 } 00:18:35.144 ] 00:18:35.144 }' 00:18:35.144 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.144 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.714 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:35.714 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.714 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.714 [2024-11-19 12:10:38.800222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:35.714 [2024-11-19 12:10:38.800394] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:35.714 [2024-11-19 12:10:38.800416] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:35.714 [2024-11-19 12:10:38.800449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:35.714 [2024-11-19 12:10:38.815839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:35.714 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.714 12:10:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:35.714 [2024-11-19 12:10:38.817628] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:36.652 "name": "raid_bdev1", 00:18:36.652 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:36.652 "strip_size_kb": 0, 00:18:36.652 "state": "online", 00:18:36.652 "raid_level": "raid1", 00:18:36.652 "superblock": true, 00:18:36.652 "num_base_bdevs": 2, 00:18:36.652 "num_base_bdevs_discovered": 2, 00:18:36.652 "num_base_bdevs_operational": 2, 00:18:36.652 "process": { 00:18:36.652 "type": "rebuild", 00:18:36.652 "target": "spare", 00:18:36.652 "progress": { 00:18:36.652 "blocks": 2560, 00:18:36.652 "percent": 32 00:18:36.652 } 00:18:36.652 }, 00:18:36.652 "base_bdevs_list": [ 00:18:36.652 { 00:18:36.652 "name": "spare", 00:18:36.652 "uuid": "1aaeb342-8c71-541d-bbcd-ddeba6cc615d", 00:18:36.652 "is_configured": true, 00:18:36.652 "data_offset": 256, 00:18:36.652 "data_size": 7936 00:18:36.652 }, 00:18:36.652 { 00:18:36.652 "name": "BaseBdev2", 00:18:36.652 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:36.652 "is_configured": true, 00:18:36.652 "data_offset": 256, 00:18:36.652 "data_size": 7936 00:18:36.652 } 00:18:36.652 ] 00:18:36.652 }' 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.652 12:10:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.652 [2024-11-19 12:10:39.977424] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:36.652 [2024-11-19 12:10:40.022340] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:36.652 [2024-11-19 12:10:40.022402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.652 [2024-11-19 12:10:40.022415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:36.652 [2024-11-19 12:10:40.022424] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.912 12:10:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.912 "name": "raid_bdev1", 00:18:36.912 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:36.912 "strip_size_kb": 0, 00:18:36.912 "state": "online", 00:18:36.912 "raid_level": "raid1", 00:18:36.912 "superblock": true, 00:18:36.912 "num_base_bdevs": 2, 00:18:36.912 "num_base_bdevs_discovered": 1, 00:18:36.912 "num_base_bdevs_operational": 1, 00:18:36.912 "base_bdevs_list": [ 00:18:36.912 { 00:18:36.912 "name": null, 00:18:36.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.912 "is_configured": false, 00:18:36.912 "data_offset": 0, 00:18:36.912 "data_size": 7936 00:18:36.912 }, 00:18:36.912 { 00:18:36.912 "name": "BaseBdev2", 00:18:36.912 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:36.912 "is_configured": true, 00:18:36.912 "data_offset": 256, 00:18:36.912 "data_size": 7936 00:18:36.912 } 00:18:36.912 ] 00:18:36.912 }' 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.912 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.172 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:37.172 12:10:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.172 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.172 [2024-11-19 12:10:40.443178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:37.172 [2024-11-19 12:10:40.443238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.172 [2024-11-19 12:10:40.443261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:37.172 [2024-11-19 12:10:40.443272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.172 [2024-11-19 12:10:40.443467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.172 [2024-11-19 12:10:40.443489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:37.172 [2024-11-19 12:10:40.443543] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:37.172 [2024-11-19 12:10:40.443555] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:37.172 [2024-11-19 12:10:40.443564] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:37.172 [2024-11-19 12:10:40.443591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:37.172 [2024-11-19 12:10:40.459364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:37.172 spare 00:18:37.172 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.172 12:10:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:37.172 [2024-11-19 12:10:40.461179] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:38.112 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.112 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.112 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.112 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.112 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.112 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.112 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.112 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.112 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.372 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.372 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:38.372 "name": "raid_bdev1", 00:18:38.372 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:38.372 "strip_size_kb": 0, 00:18:38.372 "state": "online", 00:18:38.373 "raid_level": "raid1", 00:18:38.373 "superblock": true, 00:18:38.373 "num_base_bdevs": 2, 00:18:38.373 "num_base_bdevs_discovered": 2, 00:18:38.373 "num_base_bdevs_operational": 2, 00:18:38.373 "process": { 00:18:38.373 "type": "rebuild", 00:18:38.373 "target": "spare", 00:18:38.373 "progress": { 00:18:38.373 "blocks": 2560, 00:18:38.373 "percent": 32 00:18:38.373 } 00:18:38.373 }, 00:18:38.373 "base_bdevs_list": [ 00:18:38.373 { 00:18:38.373 "name": "spare", 00:18:38.373 "uuid": "1aaeb342-8c71-541d-bbcd-ddeba6cc615d", 00:18:38.373 "is_configured": true, 00:18:38.373 "data_offset": 256, 00:18:38.373 "data_size": 7936 00:18:38.373 }, 00:18:38.373 { 00:18:38.373 "name": "BaseBdev2", 00:18:38.373 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:38.373 "is_configured": true, 00:18:38.373 "data_offset": 256, 00:18:38.373 "data_size": 7936 00:18:38.373 } 00:18:38.373 ] 00:18:38.373 }' 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.373 [2024-11-19 
12:10:41.620968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:38.373 [2024-11-19 12:10:41.665853] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:38.373 [2024-11-19 12:10:41.665904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.373 [2024-11-19 12:10:41.665918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:38.373 [2024-11-19 12:10:41.665925] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.373 12:10:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.373 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.633 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.633 "name": "raid_bdev1", 00:18:38.633 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:38.633 "strip_size_kb": 0, 00:18:38.633 "state": "online", 00:18:38.633 "raid_level": "raid1", 00:18:38.633 "superblock": true, 00:18:38.633 "num_base_bdevs": 2, 00:18:38.633 "num_base_bdevs_discovered": 1, 00:18:38.633 "num_base_bdevs_operational": 1, 00:18:38.633 "base_bdevs_list": [ 00:18:38.633 { 00:18:38.633 "name": null, 00:18:38.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.633 "is_configured": false, 00:18:38.633 "data_offset": 0, 00:18:38.633 "data_size": 7936 00:18:38.633 }, 00:18:38.633 { 00:18:38.633 "name": "BaseBdev2", 00:18:38.633 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:38.633 "is_configured": true, 00:18:38.633 "data_offset": 256, 00:18:38.633 "data_size": 7936 00:18:38.633 } 00:18:38.633 ] 00:18:38.633 }' 00:18:38.633 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.633 12:10:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.894 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.894 12:10:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.894 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.894 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.894 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.894 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.894 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.894 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.894 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.894 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.894 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.894 "name": "raid_bdev1", 00:18:38.894 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:38.894 "strip_size_kb": 0, 00:18:38.894 "state": "online", 00:18:38.894 "raid_level": "raid1", 00:18:38.894 "superblock": true, 00:18:38.894 "num_base_bdevs": 2, 00:18:38.894 "num_base_bdevs_discovered": 1, 00:18:38.894 "num_base_bdevs_operational": 1, 00:18:38.894 "base_bdevs_list": [ 00:18:38.894 { 00:18:38.894 "name": null, 00:18:38.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.894 "is_configured": false, 00:18:38.894 "data_offset": 0, 00:18:38.894 "data_size": 7936 00:18:38.894 }, 00:18:38.894 { 00:18:38.894 "name": "BaseBdev2", 00:18:38.894 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:38.894 "is_configured": true, 00:18:38.894 "data_offset": 256, 
00:18:38.894 "data_size": 7936 00:18:38.894 } 00:18:38.894 ] 00:18:38.894 }' 00:18:38.894 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.894 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.894 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.155 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.155 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:39.155 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.155 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.155 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.155 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:39.155 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.155 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.155 [2024-11-19 12:10:42.326057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:39.155 [2024-11-19 12:10:42.326108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.155 [2024-11-19 12:10:42.326130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:39.155 [2024-11-19 12:10:42.326139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.155 [2024-11-19 12:10:42.326288] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.155 [2024-11-19 12:10:42.326307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:39.155 [2024-11-19 12:10:42.326369] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:39.155 [2024-11-19 12:10:42.326381] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:39.155 [2024-11-19 12:10:42.326391] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:39.155 [2024-11-19 12:10:42.326400] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:39.155 BaseBdev1 00:18:39.155 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.155 12:10:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.096 12:10:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.096 "name": "raid_bdev1", 00:18:40.096 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:40.096 "strip_size_kb": 0, 00:18:40.096 "state": "online", 00:18:40.096 "raid_level": "raid1", 00:18:40.096 "superblock": true, 00:18:40.096 "num_base_bdevs": 2, 00:18:40.096 "num_base_bdevs_discovered": 1, 00:18:40.096 "num_base_bdevs_operational": 1, 00:18:40.096 "base_bdevs_list": [ 00:18:40.096 { 00:18:40.096 "name": null, 00:18:40.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.096 "is_configured": false, 00:18:40.096 "data_offset": 0, 00:18:40.096 "data_size": 7936 00:18:40.096 }, 00:18:40.096 { 00:18:40.096 "name": "BaseBdev2", 00:18:40.096 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:40.096 "is_configured": true, 00:18:40.096 "data_offset": 256, 00:18:40.096 "data_size": 7936 00:18:40.096 } 00:18:40.096 ] 00:18:40.096 }' 00:18:40.096 12:10:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.096 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.667 "name": "raid_bdev1", 00:18:40.667 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:40.667 "strip_size_kb": 0, 00:18:40.667 "state": "online", 00:18:40.667 "raid_level": "raid1", 00:18:40.667 "superblock": true, 00:18:40.667 "num_base_bdevs": 2, 00:18:40.667 "num_base_bdevs_discovered": 1, 00:18:40.667 "num_base_bdevs_operational": 1, 00:18:40.667 "base_bdevs_list": [ 00:18:40.667 { 00:18:40.667 "name": 
null, 00:18:40.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.667 "is_configured": false, 00:18:40.667 "data_offset": 0, 00:18:40.667 "data_size": 7936 00:18:40.667 }, 00:18:40.667 { 00:18:40.667 "name": "BaseBdev2", 00:18:40.667 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:40.667 "is_configured": true, 00:18:40.667 "data_offset": 256, 00:18:40.667 "data_size": 7936 00:18:40.667 } 00:18:40.667 ] 00:18:40.667 }' 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.667 [2024-11-19 12:10:43.923329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.667 [2024-11-19 12:10:43.923491] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:40.667 [2024-11-19 12:10:43.923513] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:40.667 request: 00:18:40.667 { 00:18:40.667 "base_bdev": "BaseBdev1", 00:18:40.667 "raid_bdev": "raid_bdev1", 00:18:40.667 "method": "bdev_raid_add_base_bdev", 00:18:40.667 "req_id": 1 00:18:40.667 } 00:18:40.667 Got JSON-RPC error response 00:18:40.667 response: 00:18:40.667 { 00:18:40.667 "code": -22, 00:18:40.667 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:40.667 } 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.667 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.668 12:10:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.631 "name": "raid_bdev1", 00:18:41.631 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:41.631 "strip_size_kb": 0, 
00:18:41.631 "state": "online", 00:18:41.631 "raid_level": "raid1", 00:18:41.631 "superblock": true, 00:18:41.631 "num_base_bdevs": 2, 00:18:41.631 "num_base_bdevs_discovered": 1, 00:18:41.631 "num_base_bdevs_operational": 1, 00:18:41.631 "base_bdevs_list": [ 00:18:41.631 { 00:18:41.631 "name": null, 00:18:41.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.631 "is_configured": false, 00:18:41.631 "data_offset": 0, 00:18:41.631 "data_size": 7936 00:18:41.631 }, 00:18:41.631 { 00:18:41.631 "name": "BaseBdev2", 00:18:41.631 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:41.631 "is_configured": true, 00:18:41.631 "data_offset": 256, 00:18:41.631 "data_size": 7936 00:18:41.631 } 00:18:41.631 ] 00:18:41.631 }' 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.631 12:10:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.201 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:42.201 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.201 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:42.201 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:42.201 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.201 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.202 
12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.202 "name": "raid_bdev1", 00:18:42.202 "uuid": "b1f04349-7a79-4f47-888c-f5f190785fe6", 00:18:42.202 "strip_size_kb": 0, 00:18:42.202 "state": "online", 00:18:42.202 "raid_level": "raid1", 00:18:42.202 "superblock": true, 00:18:42.202 "num_base_bdevs": 2, 00:18:42.202 "num_base_bdevs_discovered": 1, 00:18:42.202 "num_base_bdevs_operational": 1, 00:18:42.202 "base_bdevs_list": [ 00:18:42.202 { 00:18:42.202 "name": null, 00:18:42.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.202 "is_configured": false, 00:18:42.202 "data_offset": 0, 00:18:42.202 "data_size": 7936 00:18:42.202 }, 00:18:42.202 { 00:18:42.202 "name": "BaseBdev2", 00:18:42.202 "uuid": "262f0d79-a942-5316-8c07-d09d662f0d24", 00:18:42.202 "is_configured": true, 00:18:42.202 "data_offset": 256, 00:18:42.202 "data_size": 7936 00:18:42.202 } 00:18:42.202 ] 00:18:42.202 }' 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88950 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88950 ']' 00:18:42.202 12:10:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88950 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88950 00:18:42.202 killing process with pid 88950 00:18:42.202 Received shutdown signal, test time was about 60.000000 seconds 00:18:42.202 00:18:42.202 Latency(us) 00:18:42.202 [2024-11-19T12:10:45.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.202 [2024-11-19T12:10:45.579Z] =================================================================================================================== 00:18:42.202 [2024-11-19T12:10:45.579Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88950' 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88950 00:18:42.202 [2024-11-19 12:10:45.562887] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:42.202 [2024-11-19 12:10:45.563017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:42.202 [2024-11-19 12:10:45.563065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:42.202 [2024-11-19 12:10:45.563075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:42.202 12:10:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88950 00:18:42.771 [2024-11-19 12:10:45.848291] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:43.713 12:10:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:43.713 00:18:43.713 real 0m17.224s 00:18:43.713 user 0m22.500s 00:18:43.713 sys 0m1.679s 00:18:43.713 12:10:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.713 12:10:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.713 ************************************ 00:18:43.713 END TEST raid_rebuild_test_sb_md_interleaved 00:18:43.713 ************************************ 00:18:43.713 12:10:46 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:43.713 12:10:46 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:43.713 12:10:46 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88950 ']' 00:18:43.713 12:10:46 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88950 00:18:43.713 12:10:46 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:43.713 00:18:43.713 real 11m49.166s 00:18:43.713 user 15m58.870s 00:18:43.713 sys 1m51.683s 00:18:43.713 12:10:46 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.713 12:10:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.713 ************************************ 00:18:43.713 END TEST bdev_raid 00:18:43.713 ************************************ 00:18:43.713 12:10:47 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:43.713 12:10:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:43.713 12:10:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.713 12:10:47 -- common/autotest_common.sh@10 -- # set +x 00:18:43.713 
************************************ 00:18:43.713 START TEST spdkcli_raid 00:18:43.713 ************************************ 00:18:43.713 12:10:47 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:43.974 * Looking for test storage... 00:18:43.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:43.974 12:10:47 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:43.974 12:10:47 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:18:43.974 12:10:47 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:43.974 12:10:47 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.974 12:10:47 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:43.974 12:10:47 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.974 12:10:47 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:43.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.974 --rc genhtml_branch_coverage=1 00:18:43.974 --rc genhtml_function_coverage=1 00:18:43.974 --rc genhtml_legend=1 00:18:43.974 --rc geninfo_all_blocks=1 00:18:43.974 --rc geninfo_unexecuted_blocks=1 00:18:43.974 00:18:43.975 ' 00:18:43.975 12:10:47 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:43.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.975 --rc genhtml_branch_coverage=1 00:18:43.975 --rc genhtml_function_coverage=1 00:18:43.975 --rc genhtml_legend=1 00:18:43.975 --rc geninfo_all_blocks=1 00:18:43.975 --rc geninfo_unexecuted_blocks=1 00:18:43.975 00:18:43.975 ' 00:18:43.975 
12:10:47 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:43.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.975 --rc genhtml_branch_coverage=1 00:18:43.975 --rc genhtml_function_coverage=1 00:18:43.975 --rc genhtml_legend=1 00:18:43.975 --rc geninfo_all_blocks=1 00:18:43.975 --rc geninfo_unexecuted_blocks=1 00:18:43.975 00:18:43.975 ' 00:18:43.975 12:10:47 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:43.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.975 --rc genhtml_branch_coverage=1 00:18:43.975 --rc genhtml_function_coverage=1 00:18:43.975 --rc genhtml_legend=1 00:18:43.975 --rc geninfo_all_blocks=1 00:18:43.975 --rc geninfo_unexecuted_blocks=1 00:18:43.975 00:18:43.975 ' 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:43.975 12:10:47 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:43.975 12:10:47 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.975 12:10:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89626 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:43.975 12:10:47 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89626 00:18:43.975 12:10:47 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89626 ']' 00:18:43.975 12:10:47 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.975 12:10:47 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.975 12:10:47 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.975 12:10:47 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.975 12:10:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.236 [2024-11-19 12:10:47.388477] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:44.236 [2024-11-19 12:10:47.388609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89626 ] 00:18:44.236 [2024-11-19 12:10:47.561966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:44.497 [2024-11-19 12:10:47.673030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.497 [2024-11-19 12:10:47.673088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.443 12:10:48 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.443 12:10:48 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:45.443 12:10:48 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:45.443 12:10:48 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.443 12:10:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.443 12:10:48 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:45.443 12:10:48 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.443 12:10:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.443 12:10:48 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:45.443 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:45.443 ' 00:18:46.882 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:46.882 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:46.882 12:10:50 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:46.882 12:10:50 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.882 12:10:50 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.142 12:10:50 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:47.142 12:10:50 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:47.142 12:10:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.142 12:10:50 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:47.142 ' 00:18:48.083 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:48.083 12:10:51 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:48.083 12:10:51 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.083 12:10:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.083 12:10:51 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:48.083 12:10:51 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.083 12:10:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.343 12:10:51 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:48.343 12:10:51 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:48.603 12:10:51 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:48.862 12:10:51 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:48.862 12:10:51 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:48.862 12:10:51 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.862 12:10:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.862 12:10:52 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:48.862 12:10:52 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.862 12:10:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.862 12:10:52 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:48.862 ' 00:18:49.803 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:49.803 12:10:53 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:49.803 12:10:53 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.803 12:10:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.803 12:10:53 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:49.803 12:10:53 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.803 12:10:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.803 12:10:53 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:49.803 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:49.803 ' 00:18:51.185 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:51.185 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:51.445 12:10:54 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:51.445 12:10:54 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.445 12:10:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.446 12:10:54 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89626 00:18:51.446 12:10:54 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89626 ']' 00:18:51.446 12:10:54 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89626 00:18:51.446 12:10:54 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:51.446 12:10:54 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.446 12:10:54 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89626 00:18:51.446 12:10:54 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:51.446 12:10:54 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:51.446 12:10:54 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89626' 00:18:51.446 killing process with pid 89626 00:18:51.446 12:10:54 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89626 00:18:51.446 12:10:54 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89626 00:18:53.989 12:10:56 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:53.989 12:10:56 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89626 ']' 00:18:53.989 12:10:56 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89626 00:18:53.989 12:10:56 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89626 ']' 00:18:53.989 12:10:56 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89626 00:18:53.989 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89626) - No such process 00:18:53.989 Process with pid 89626 is not found 00:18:53.989 12:10:56 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89626 is not found' 00:18:53.989 12:10:56 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:53.989 12:10:56 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:53.989 12:10:56 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:53.989 12:10:56 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:53.989 00:18:53.989 real 0m9.943s 00:18:53.989 user 0m20.441s 00:18:53.989 sys 
0m1.173s 00:18:53.989 12:10:56 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.989 12:10:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.989 ************************************ 00:18:53.989 END TEST spdkcli_raid 00:18:53.989 ************************************ 00:18:53.989 12:10:57 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:53.989 12:10:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:53.989 12:10:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:53.989 12:10:57 -- common/autotest_common.sh@10 -- # set +x 00:18:53.989 ************************************ 00:18:53.989 START TEST blockdev_raid5f 00:18:53.989 ************************************ 00:18:53.989 12:10:57 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:53.989 * Looking for test storage... 00:18:53.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:53.989 12:10:57 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:53.989 12:10:57 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:18:53.989 12:10:57 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:53.989 12:10:57 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:53.989 12:10:57 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:53.989 12:10:57 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.990 12:10:57 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:53.990 12:10:57 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:53.990 12:10:57 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:53.990 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.990 --rc genhtml_branch_coverage=1 00:18:53.990 --rc genhtml_function_coverage=1 00:18:53.990 --rc genhtml_legend=1 00:18:53.990 --rc geninfo_all_blocks=1 00:18:53.990 --rc geninfo_unexecuted_blocks=1 00:18:53.990 00:18:53.990 ' 00:18:53.990 12:10:57 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:53.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.990 --rc genhtml_branch_coverage=1 00:18:53.990 --rc genhtml_function_coverage=1 00:18:53.990 --rc genhtml_legend=1 00:18:53.990 --rc geninfo_all_blocks=1 00:18:53.990 --rc geninfo_unexecuted_blocks=1 00:18:53.990 00:18:53.990 ' 00:18:53.990 12:10:57 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:53.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.990 --rc genhtml_branch_coverage=1 00:18:53.990 --rc genhtml_function_coverage=1 00:18:53.990 --rc genhtml_legend=1 00:18:53.990 --rc geninfo_all_blocks=1 00:18:53.990 --rc geninfo_unexecuted_blocks=1 00:18:53.990 00:18:53.990 ' 00:18:53.990 12:10:57 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:53.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.990 --rc genhtml_branch_coverage=1 00:18:53.990 --rc genhtml_function_coverage=1 00:18:53.990 --rc genhtml_legend=1 00:18:53.990 --rc geninfo_all_blocks=1 00:18:53.990 --rc geninfo_unexecuted_blocks=1 00:18:53.990 00:18:53.990 ' 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89901 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:53.990 12:10:57 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89901 00:18:53.990 12:10:57 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89901 ']' 00:18:53.990 12:10:57 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.990 12:10:57 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.990 12:10:57 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.990 12:10:57 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.990 12:10:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:54.250 [2024-11-19 12:10:57.394453] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:18:54.250 [2024-11-19 12:10:57.394552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89901 ] 00:18:54.250 [2024-11-19 12:10:57.566035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.511 [2024-11-19 12:10:57.673091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.451 12:10:58 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.451 12:10:58 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:55.451 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:55.451 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:18:55.451 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:55.451 12:10:58 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.451 12:10:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.451 Malloc0 00:18:55.451 Malloc1 00:18:55.451 Malloc2 00:18:55.451 12:10:58 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.451 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:55.451 12:10:58 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.451 12:10:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.451 12:10:58 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.451 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:18:55.451 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:55.451 12:10:58 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.451 12:10:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.451 12:10:58 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.451 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:55.451 12:10:58 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.452 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.452 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:55.452 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.452 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.452 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:55.452 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:55.452 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "c9a82f7c-097d-4ff6-a8b1-39461a265755"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c9a82f7c-097d-4ff6-a8b1-39461a265755",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "c9a82f7c-097d-4ff6-a8b1-39461a265755",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "e91f4cb2-b8cb-4298-83bc-73c408a25f32",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"7cf4ea81-e74b-4c5c-99e7-e9a68324141e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "77e04575-3183-490a-96d8-e040e7ffd3fa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:55.452 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:55.452 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:18:55.452 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:55.452 12:10:58 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89901 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89901 ']' 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89901 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89901 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.452 killing process with pid 89901 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89901' 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89901 00:18:55.452 12:10:58 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89901 00:18:57.994 12:11:01 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:57.994 12:11:01 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:57.994 12:11:01 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:57.994 12:11:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.994 12:11:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:57.994 ************************************ 00:18:57.994 START TEST bdev_hello_world 00:18:57.994 ************************************ 00:18:57.994 12:11:01 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:58.254 [2024-11-19 12:11:01.389127] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:18:58.254 [2024-11-19 12:11:01.389250] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89965 ] 00:18:58.254 [2024-11-19 12:11:01.566474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.514 [2024-11-19 12:11:01.673271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.085 [2024-11-19 12:11:02.192889] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:59.085 [2024-11-19 12:11:02.192939] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:59.085 [2024-11-19 12:11:02.192956] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:59.085 [2024-11-19 12:11:02.193467] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:59.085 [2024-11-19 12:11:02.193608] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:59.085 [2024-11-19 12:11:02.193631] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:59.085 [2024-11-19 12:11:02.193678] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:18:59.085 00:18:59.085 [2024-11-19 12:11:02.193697] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:00.468 00:19:00.468 real 0m2.190s 00:19:00.468 user 0m1.806s 00:19:00.468 sys 0m0.263s 00:19:00.468 12:11:03 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.468 12:11:03 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:00.468 ************************************ 00:19:00.468 END TEST bdev_hello_world 00:19:00.468 ************************************ 00:19:00.468 12:11:03 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:00.468 12:11:03 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:00.468 12:11:03 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.468 12:11:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.468 ************************************ 00:19:00.468 START TEST bdev_bounds 00:19:00.468 ************************************ 00:19:00.468 12:11:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:00.468 12:11:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90007 00:19:00.468 12:11:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:00.468 12:11:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:00.468 12:11:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90007' 00:19:00.468 Process bdevio pid: 90007 00:19:00.468 12:11:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90007 00:19:00.468 12:11:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90007 ']' 00:19:00.468 12:11:03 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.468 12:11:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.468 12:11:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.468 12:11:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.468 12:11:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:00.468 [2024-11-19 12:11:03.653206] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:00.468 [2024-11-19 12:11:03.653331] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90007 ] 00:19:00.468 [2024-11-19 12:11:03.826224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:00.728 [2024-11-19 12:11:03.932102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.728 [2024-11-19 12:11:03.932222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.728 [2024-11-19 12:11:03.932257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.298 12:11:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.298 12:11:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:01.298 12:11:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:01.298 I/O targets: 00:19:01.298 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:01.298 00:19:01.298 
00:19:01.298 CUnit - A unit testing framework for C - Version 2.1-3 00:19:01.298 http://cunit.sourceforge.net/ 00:19:01.298 00:19:01.298 00:19:01.298 Suite: bdevio tests on: raid5f 00:19:01.298 Test: blockdev write read block ...passed 00:19:01.298 Test: blockdev write zeroes read block ...passed 00:19:01.298 Test: blockdev write zeroes read no split ...passed 00:19:01.557 Test: blockdev write zeroes read split ...passed 00:19:01.557 Test: blockdev write zeroes read split partial ...passed 00:19:01.557 Test: blockdev reset ...passed 00:19:01.557 Test: blockdev write read 8 blocks ...passed 00:19:01.557 Test: blockdev write read size > 128k ...passed 00:19:01.557 Test: blockdev write read invalid size ...passed 00:19:01.557 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:01.557 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:01.557 Test: blockdev write read max offset ...passed 00:19:01.557 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:01.557 Test: blockdev writev readv 8 blocks ...passed 00:19:01.557 Test: blockdev writev readv 30 x 1block ...passed 00:19:01.557 Test: blockdev writev readv block ...passed 00:19:01.557 Test: blockdev writev readv size > 128k ...passed 00:19:01.557 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:01.557 Test: blockdev comparev and writev ...passed 00:19:01.557 Test: blockdev nvme passthru rw ...passed 00:19:01.557 Test: blockdev nvme passthru vendor specific ...passed 00:19:01.557 Test: blockdev nvme admin passthru ...passed 00:19:01.557 Test: blockdev copy ...passed 00:19:01.557 00:19:01.557 Run Summary: Type Total Ran Passed Failed Inactive 00:19:01.557 suites 1 1 n/a 0 0 00:19:01.557 tests 23 23 23 0 0 00:19:01.557 asserts 130 130 130 0 n/a 00:19:01.557 00:19:01.557 Elapsed time = 0.637 seconds 00:19:01.557 0 00:19:01.557 12:11:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90007 00:19:01.557 
12:11:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90007 ']' 00:19:01.557 12:11:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90007 00:19:01.557 12:11:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:01.557 12:11:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.557 12:11:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90007 00:19:01.557 12:11:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.557 12:11:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.557 killing process with pid 90007 00:19:01.557 12:11:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90007' 00:19:01.557 12:11:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90007 00:19:01.557 12:11:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90007 00:19:02.941 12:11:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:02.941 00:19:02.941 real 0m2.659s 00:19:02.941 user 0m6.593s 00:19:02.941 sys 0m0.395s 00:19:02.941 12:11:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.941 12:11:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:02.941 ************************************ 00:19:02.941 END TEST bdev_bounds 00:19:02.941 ************************************ 00:19:02.941 12:11:06 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:02.941 12:11:06 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:02.941 12:11:06 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.941 
12:11:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:02.941 ************************************ 00:19:02.941 START TEST bdev_nbd 00:19:02.941 ************************************ 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90067 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90067 /var/tmp/spdk-nbd.sock 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90067 ']' 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.941 12:11:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:03.201 [2024-11-19 12:11:06.397920] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:19:03.201 [2024-11-19 12:11:06.398058] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.201 [2024-11-19 12:11:06.574350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.461 [2024-11-19 12:11:06.683000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:04.032 12:11:07 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:04.293 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:04.293 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:04.293 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:04.293 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:04.293 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:04.293 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:04.293 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:04.293 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:04.293 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:04.294 1+0 records in 00:19:04.294 1+0 records out 00:19:04.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638231 s, 6.4 MB/s 00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:04.294 { 00:19:04.294 "nbd_device": "/dev/nbd0", 00:19:04.294 "bdev_name": "raid5f" 00:19:04.294 } 00:19:04.294 ]' 00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:04.294 { 00:19:04.294 "nbd_device": "/dev/nbd0", 00:19:04.294 "bdev_name": "raid5f" 00:19:04.294 } 00:19:04.294 ]' 00:19:04.294 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:04.554 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:04.554 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.554 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:04.554 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:04.554 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:04.554 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.554 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:04.554 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:04.554 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:04.814 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:04.814 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.814 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.814 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:04.814 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:04.814 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:04.814 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:04.814 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.814 12:11:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:04.814 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:04.814 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:04.814 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:04.814 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:04.814 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:04.814 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:04.814 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:04.814 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:04.814 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:05.075 /dev/nbd0 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:05.075 12:11:08 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:05.075 1+0 records in 00:19:05.075 1+0 records out 00:19:05.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409021 s, 10.0 MB/s 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:05.075 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:05.335 12:11:08 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:05.335 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.335 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:05.335 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:05.335 { 00:19:05.335 "nbd_device": "/dev/nbd0", 00:19:05.335 "bdev_name": "raid5f" 00:19:05.335 } 00:19:05.335 ]' 00:19:05.335 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:05.335 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:05.335 { 00:19:05.335 "nbd_device": "/dev/nbd0", 00:19:05.335 "bdev_name": "raid5f" 00:19:05.335 } 00:19:05.335 ]' 00:19:05.335 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:05.335 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:05.335 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:05.335 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:05.335 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:05.335 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:05.335 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:05.336 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:05.336 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:05.336 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:05.336 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:05.336 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:05.336 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:05.336 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:05.596 256+0 records in 00:19:05.596 256+0 records out 00:19:05.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125084 s, 83.8 MB/s 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:05.596 256+0 records in 00:19:05.596 256+0 records out 00:19:05.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316533 s, 33.1 MB/s 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:05.596 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:05.856 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:05.856 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:05.856 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:05.856 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.856 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.856 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:05.856 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:05.856 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.856 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:05.856 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.856 12:11:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
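The nbd_dd_data_verify sequence above follows a simple write-then-compare pattern: fill a scratch file with 1 MiB of random data, dd it onto the device with O_DIRECT, then cmp the file against the device byte-for-byte. A minimal runnable sketch of the same pattern, using a second temp file in place of /dev/nbd0 (and dropping oflag=direct, which plain files on some filesystems reject):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-ins for the paths in nbd_common.sh; a temp file replaces /dev/nbd0
# so the sketch runs without an NBD device attached.
tmp_file=$(mktemp)
device=$(mktemp)

# 1. Generate the reference data: 256 blocks of 4 KiB = 1 MiB of random bytes.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none

# 2. Write the reference data to the "device" (the real test adds oflag=direct).
dd if="$tmp_file" of="$device" bs=4096 count=256 status=none

# 3. Compare the first 1 MiB byte-for-byte; cmp exits non-zero on any mismatch.
cmp -b -n 1M "$tmp_file" "$device" && echo "verify OK"

rm -f "$tmp_file" "$device"
```

The real helper then detaches the device with nbd_stop_disk and polls /proc/partitions (up to 20 times, as seen above) until the nbd entry disappears.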
00:19:05.856 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:05.856 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:05.856 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:06.115 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:06.115 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:06.115 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:06.115 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:06.115 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:06.115 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:06.115 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:06.115 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:06.115 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:06.115 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:06.115 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.115 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:06.115 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:06.115 malloc_lvol_verify 00:19:06.115 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:06.374 8d732aca-5896-4bdc-9009-31aa49118bed 00:19:06.374 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:06.657 d19d4715-c75b-4072-922d-aece1e4e55d5 00:19:06.657 12:11:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:06.917 /dev/nbd0 00:19:06.917 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:06.917 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:06.917 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:06.917 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:06.917 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:06.917 mke2fs 1.47.0 (5-Feb-2023) 00:19:06.917 Discarding device blocks: 0/4096 done 00:19:06.917 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:06.917 00:19:06.917 Allocating group tables: 0/1 done 00:19:06.917 Writing inode tables: 0/1 done 00:19:06.917 Creating journal (1024 blocks): done 00:19:06.917 Writing superblocks and filesystem accounting information: 0/1 done 00:19:06.917 00:19:06.917 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90067 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90067 ']' 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90067 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:06.918 12:11:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.178 12:11:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90067 00:19:07.178 12:11:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.178 12:11:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.178 killing process with pid 90067 00:19:07.178 12:11:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90067' 00:19:07.178 12:11:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90067 00:19:07.178 12:11:10 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90067 00:19:08.639 12:11:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:08.639 00:19:08.639 real 0m5.404s 00:19:08.639 user 0m7.264s 00:19:08.639 sys 0m1.302s 00:19:08.639 12:11:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.639 12:11:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:08.639 ************************************ 00:19:08.639 END TEST bdev_nbd 00:19:08.639 ************************************ 00:19:08.639 12:11:11 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:08.639 12:11:11 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:08.639 12:11:11 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:08.639 12:11:11 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:08.639 12:11:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:08.639 12:11:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.639 12:11:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:08.639 ************************************ 00:19:08.639 START TEST bdev_fio 00:19:08.639 ************************************ 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:08.639 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:08.639 12:11:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:08.640 ************************************ 00:19:08.640 START TEST bdev_fio_rw_verify 00:19:08.640 ************************************ 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:08.640 12:11:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:08.640 12:11:12 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:08.640 12:11:12 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:08.640 12:11:12 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:08.640 12:11:12 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:08.640 12:11:12 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:08.901 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:08.901 fio-3.35 00:19:08.901 Starting 1 thread 00:19:21.132 00:19:21.132 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90273: Tue Nov 19 12:11:23 2024 00:19:21.132 read: IOPS=12.4k, BW=48.6MiB/s (51.0MB/s)(486MiB/10001msec) 00:19:21.132 slat (usec): min=16, max=399, avg=18.93, stdev= 3.55 00:19:21.132 clat (usec): min=10, max=984, avg=129.61, stdev=48.34 00:19:21.132 lat (usec): min=29, max=1061, avg=148.53, stdev=49.47 00:19:21.132 clat percentiles (usec): 00:19:21.132 | 50.000th=[ 133], 99.000th=[ 219], 99.900th=[ 396], 99.990th=[ 938], 00:19:21.132 | 99.999th=[ 963] 00:19:21.132 write: IOPS=13.0k, BW=50.7MiB/s (53.2MB/s)(501MiB/9878msec); 0 zone resets 00:19:21.132 slat (usec): min=7, max=240, avg=16.26, stdev= 3.73 00:19:21.132 clat (usec): min=59, max=1636, avg=296.25, stdev=43.36 00:19:21.132 lat (usec): min=73, max=1837, avg=312.51, stdev=44.45 00:19:21.132 clat percentiles (usec): 00:19:21.132 | 50.000th=[ 302], 99.000th=[ 375], 99.900th=[ 627], 99.990th=[ 1418], 00:19:21.132 | 99.999th=[ 1582] 00:19:21.132 bw ( KiB/s): min=47696, max=54824, per=98.84%, avg=51346.95, stdev=1817.09, samples=19 00:19:21.132 iops : min=11924, max=13706, avg=12836.74, stdev=454.27, samples=19 00:19:21.132 lat (usec) : 20=0.01%, 50=0.01%, 100=16.21%, 
250=39.34%, 500=44.34% 00:19:21.132 lat (usec) : 750=0.07%, 1000=0.03% 00:19:21.132 lat (msec) : 2=0.01% 00:19:21.132 cpu : usr=98.94%, sys=0.31%, ctx=27, majf=0, minf=10145 00:19:21.132 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.132 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.132 issued rwts: total=124412,128290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.132 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:21.132 00:19:21.132 Run status group 0 (all jobs): 00:19:21.132 READ: bw=48.6MiB/s (51.0MB/s), 48.6MiB/s-48.6MiB/s (51.0MB/s-51.0MB/s), io=486MiB (510MB), run=10001-10001msec 00:19:21.132 WRITE: bw=50.7MiB/s (53.2MB/s), 50.7MiB/s-50.7MiB/s (53.2MB/s-53.2MB/s), io=501MiB (525MB), run=9878-9878msec 00:19:21.392 ----------------------------------------------------- 00:19:21.392 Suppressions used: 00:19:21.392 count bytes template 00:19:21.392 1 7 /usr/src/fio/parse.c 00:19:21.392 33 3168 /usr/src/fio/iolog.c 00:19:21.392 1 8 libtcmalloc_minimal.so 00:19:21.392 1 904 libcrypto.so 00:19:21.392 ----------------------------------------------------- 00:19:21.392 00:19:21.652 00:19:21.652 real 0m12.840s 00:19:21.652 user 0m13.047s 00:19:21.652 sys 0m0.679s 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:21.652 ************************************ 00:19:21.652 END TEST bdev_fio_rw_verify 00:19:21.652 ************************************ 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "c9a82f7c-097d-4ff6-a8b1-39461a265755"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c9a82f7c-097d-4ff6-a8b1-39461a265755",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "c9a82f7c-097d-4ff6-a8b1-39461a265755",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "e91f4cb2-b8cb-4298-83bc-73c408a25f32",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7cf4ea81-e74b-4c5c-99e7-e9a68324141e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "77e04575-3183-490a-96d8-e040e7ffd3fa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:21.652 /home/vagrant/spdk_repo/spdk 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
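The jq filter above decides whether a trim workload is generated at all: it selects only bdevs whose supported_io_types.unmap is true, and raid5f reports unmap: false, so the match is empty and the trim job is skipped. A reduced sketch of the same filter, with the bdev JSON trimmed to the relevant field (assumes jq is installed):

```shell
#!/usr/bin/env bash
set -euo pipefail

# raid5f advertises unmap=false, so select() drops it and nothing is printed;
# a bdev reporting unmap=true would have its name emitted and get a trim job.
bdev_json='{"name": "raid5f", "supported_io_types": {"unmap": false}}'
echo "$bdev_json" | jq -r 'select(.supported_io_types.unmap == true) | .name'
```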
00:19:21.652 00:19:21.652 real 0m13.153s 00:19:21.652 user 0m13.172s 00:19:21.652 sys 0m0.836s 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.652 12:11:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:21.652 ************************************ 00:19:21.652 END TEST bdev_fio 00:19:21.652 ************************************ 00:19:21.652 12:11:24 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:21.652 12:11:24 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:21.652 12:11:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:21.652 12:11:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.652 12:11:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:21.652 ************************************ 00:19:21.652 START TEST bdev_verify 00:19:21.652 ************************************ 00:19:21.652 12:11:24 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:21.911 [2024-11-19 12:11:25.094088] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:19:21.911 [2024-11-19 12:11:25.094203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90436 ] 00:19:21.911 [2024-11-19 12:11:25.273099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:22.169 [2024-11-19 12:11:25.408316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.170 [2024-11-19 12:11:25.408343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.740 Running I/O for 5 seconds... 00:19:25.064 10949.00 IOPS, 42.77 MiB/s [2024-11-19T12:11:29.391Z] 11007.50 IOPS, 43.00 MiB/s [2024-11-19T12:11:30.335Z] 11029.67 IOPS, 43.08 MiB/s [2024-11-19T12:11:31.277Z] 10988.25 IOPS, 42.92 MiB/s [2024-11-19T12:11:31.277Z] 10985.80 IOPS, 42.91 MiB/s 00:19:27.900 Latency(us) 00:19:27.900 [2024-11-19T12:11:31.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.900 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:27.900 Verification LBA range: start 0x0 length 0x2000 00:19:27.900 raid5f : 5.02 6597.70 25.77 0.00 0.00 29244.18 104.19 20719.68 00:19:27.900 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:27.900 Verification LBA range: start 0x2000 length 0x2000 00:19:27.900 raid5f : 5.01 4374.59 17.09 0.00 0.00 44095.62 155.61 31823.59 00:19:27.900 [2024-11-19T12:11:31.277Z] =================================================================================================================== 00:19:27.900 [2024-11-19T12:11:31.277Z] Total : 10972.28 42.86 0.00 0.00 35161.93 104.19 31823.59 00:19:29.285 00:19:29.285 real 0m7.446s 00:19:29.285 user 0m13.657s 00:19:29.285 sys 0m0.372s 00:19:29.285 12:11:32 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.285 12:11:32 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:29.285 ************************************ 00:19:29.285 END TEST bdev_verify 00:19:29.285 ************************************ 00:19:29.285 12:11:32 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:29.285 12:11:32 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:29.285 12:11:32 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.285 12:11:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:29.285 ************************************ 00:19:29.285 START TEST bdev_verify_big_io 00:19:29.285 ************************************ 00:19:29.285 12:11:32 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:29.285 [2024-11-19 12:11:32.614859] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:29.285 [2024-11-19 12:11:32.614979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90535 ] 00:19:29.546 [2024-11-19 12:11:32.793301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:29.806 [2024-11-19 12:11:32.931019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.806 [2024-11-19 12:11:32.931076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.378 Running I/O for 5 seconds... 
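The MiB/s column in the bdevperf latency tables is derived directly from IOPS and the configured IO size (`-o 4096` for the verify run above): MiB/s = IOPS × io_size / 2^20. For the first raid5f verify job, 6597.70 IOPS × 4096 B ≈ 25.77 MiB/s, matching the table. A quick awk check of that arithmetic:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Recompute the MiB/s column of the raid5f verify job from its IOPS:
#   throughput = IOPS * io_size_bytes / 2^20
awk 'BEGIN {
    iops = 6597.70; io_size = 4096
    printf "%.2f MiB/s\n", iops * io_size / 1048576   # prints 25.77 MiB/s
}'
```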
00:19:32.260 633.00 IOPS, 39.56 MiB/s [2024-11-19T12:11:37.020Z] 761.00 IOPS, 47.56 MiB/s [2024-11-19T12:11:37.590Z] 760.67 IOPS, 47.54 MiB/s [2024-11-19T12:11:38.973Z] 761.50 IOPS, 47.59 MiB/s [2024-11-19T12:11:38.973Z] 774.00 IOPS, 48.38 MiB/s
00:19:35.596 Latency(us)
00:19:35.596 [2024-11-19T12:11:38.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:35.596 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:35.596 Verification LBA range: start 0x0 length 0x200
00:19:35.596 raid5f : 5.22 449.55 28.10 0.00 0.00 7084669.79 291.55 304041.25
00:19:35.596 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:35.596 Verification LBA range: start 0x200 length 0x200
00:19:35.596 raid5f : 5.24 339.05 21.19 0.00 0.00 9400896.16 206.59 388293.65
00:19:35.596 [2024-11-19T12:11:38.973Z] ===================================================================================================================
00:19:35.596 [2024-11-19T12:11:38.973Z] Total : 788.60 49.29 0.00 0.00 8082791.58 206.59 388293.65
00:19:36.979
00:19:36.979 real 0m7.689s
00:19:36.979 user 0m14.147s
00:19:36.979 sys 0m0.364s
00:19:36.979 12:11:40 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:36.979 12:11:40 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:19:36.979 ************************************
00:19:36.979 END TEST bdev_verify_big_io
00:19:36.979 ************************************
00:19:36.979 12:11:40 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:36.979 12:11:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:36.979 12:11:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:36.979 12:11:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:36.979 ************************************
00:19:36.979 START TEST bdev_write_zeroes
00:19:36.979 ************************************
00:19:36.979 12:11:40 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:37.239 [2024-11-19 12:11:40.387250] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:19:37.239 [2024-11-19 12:11:40.387369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90634 ]
00:19:37.239 [2024-11-19 12:11:40.563961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:37.499 [2024-11-19 12:11:40.696815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:38.070 Running I/O for 1 seconds...
00:19:39.089 30111.00 IOPS, 117.62 MiB/s
00:19:39.089 Latency(us)
00:19:39.089 [2024-11-19T12:11:42.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:39.089 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:39.089 raid5f : 1.01 30074.91 117.48 0.00 0.00 4243.38 1352.22 5809.52
00:19:39.089 [2024-11-19T12:11:42.466Z] ===================================================================================================================
00:19:39.089 [2024-11-19T12:11:42.466Z] Total : 30074.91 117.48 0.00 0.00 4243.38 1352.22 5809.52
00:19:40.478
00:19:40.478 real 0m3.460s
00:19:40.478 user 0m2.964s
00:19:40.478 sys 0m0.370s
00:19:40.478 12:11:43 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:40.478 12:11:43 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:19:40.478 ************************************
00:19:40.478 END TEST bdev_write_zeroes
00:19:40.478 ************************************
00:19:40.478 12:11:43 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:40.478 12:11:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:40.478 12:11:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:40.478 12:11:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:40.478 ************************************
00:19:40.478 START TEST bdev_json_nonenclosed
00:19:40.478 ************************************
00:19:40.478 12:11:43 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:40.739 [2024-11-19 12:11:43.922313] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:19:40.739 [2024-11-19 12:11:43.922434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90692 ]
00:19:40.739 [2024-11-19 12:11:44.108621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:40.999 [2024-11-19 12:11:44.248755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:40.999 [2024-11-19 12:11:44.248859] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:19:40.999 [2024-11-19 12:11:44.248889] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:19:40.999 [2024-11-19 12:11:44.248899] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:41.260
00:19:41.260 real 0m0.684s
00:19:41.260 user 0m0.419s
00:19:41.260 sys 0m0.160s
00:19:41.260 12:11:44 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:41.260 12:11:44 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:19:41.260 ************************************
00:19:41.260 END TEST bdev_json_nonenclosed
00:19:41.260 ************************************
00:19:41.260 12:11:44 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:41.260 12:11:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:41.260 12:11:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:41.260 12:11:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:41.260 ************************************
00:19:41.260 START TEST bdev_json_nonarray
************************************
00:19:41.260 12:11:44 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:41.521 [2024-11-19 12:11:44.675073] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:19:41.521 [2024-11-19 12:11:44.675205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90718 ]
00:19:41.521 [2024-11-19 12:11:44.848281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:41.781 [2024-11-19 12:11:44.980155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:41.782 [2024-11-19 12:11:44.980269] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:19:41.782 [2024-11-19 12:11:44.980288] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:19:41.782 [2024-11-19 12:11:44.980309] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:42.043
00:19:42.042 real 0m0.654s
00:19:42.042 user 0m0.399s
00:19:42.043 sys 0m0.150s
00:19:42.043 12:11:45 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:42.043 12:11:45 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:19:42.043 ************************************
00:19:42.043 END TEST bdev_json_nonarray
00:19:42.043 ************************************
00:19:42.043 12:11:45 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]]
00:19:42.043 12:11:45 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]]
00:19:42.043 12:11:45 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]]
00:19:42.043 12:11:45 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:19:42.043 12:11:45 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup
00:19:42.043 12:11:45 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:19:42.043 12:11:45 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:42.043 12:11:45 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:19:42.043 12:11:45 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:19:42.043 12:11:45 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:19:42.043 12:11:45 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:19:42.043
00:19:42.043 real 0m48.263s
00:19:42.043 user 1m4.704s
00:19:42.043 sys 0m5.350s
00:19:42.043 12:11:45 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:42.043 12:11:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:42.043 ************************************
00:19:42.043 END TEST blockdev_raid5f
************************************
00:19:42.043 12:11:45 -- spdk/autotest.sh@194 -- # uname -s
00:19:42.043 12:11:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:19:42.043 12:11:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:19:42.043 12:11:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:19:42.043 12:11:45 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:19:42.043 12:11:45 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:19:42.043 12:11:45 -- spdk/autotest.sh@260 -- # timing_exit lib
00:19:42.043 12:11:45 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:42.043 12:11:45 -- common/autotest_common.sh@10 -- # set +x
00:19:42.304 12:11:45 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:19:42.304 12:11:45 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:19:42.304 12:11:45 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:19:42.304 12:11:45 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:19:42.304 12:11:45 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:19:42.304 12:11:45 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:19:42.304 12:11:45 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:19:42.304 12:11:45 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:19:42.304 12:11:45 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:19:42.304 12:11:45 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:19:42.304 12:11:45 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:19:42.304 12:11:45 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:19:42.304 12:11:45 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:19:42.304 12:11:45 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:19:42.304 12:11:45 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:19:42.304 12:11:45 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:19:42.304 12:11:45 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:19:42.304 12:11:45 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:19:42.304 12:11:45 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:19:42.304 12:11:45 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:19:42.304 12:11:45 -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:42.304 12:11:45 -- common/autotest_common.sh@10 -- # set +x
00:19:42.304 12:11:45 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:19:42.304 12:11:45 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:19:42.304 12:11:45 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:19:42.304 12:11:45 -- common/autotest_common.sh@10 -- # set +x
00:19:44.844 INFO: APP EXITING
00:19:44.844 INFO: killing all VMs
00:19:44.844 INFO: killing vhost app
00:19:44.844 INFO: EXIT DONE
00:19:44.844 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:45.105 Waiting for block devices as requested
00:19:45.105 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:19:45.105 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:19:46.046 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:46.046 Cleaning
00:19:46.046 Removing: /var/run/dpdk/spdk0/config
00:19:46.046 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:19:46.046 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:19:46.046 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:19:46.046 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:19:46.046 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:19:46.046 Removing: /var/run/dpdk/spdk0/hugepage_info
00:19:46.046 Removing: /dev/shm/spdk_tgt_trace.pid56952
00:19:46.307 Removing: /var/run/dpdk/spdk0
00:19:46.307 Removing: /var/run/dpdk/spdk_pid56717
00:19:46.307 Removing: /var/run/dpdk/spdk_pid56952
00:19:46.307 Removing: /var/run/dpdk/spdk_pid57181
00:19:46.307 Removing: /var/run/dpdk/spdk_pid57285
00:19:46.307 Removing: /var/run/dpdk/spdk_pid57341
00:19:46.307 Removing: /var/run/dpdk/spdk_pid57469
00:19:46.307 Removing: /var/run/dpdk/spdk_pid57487
00:19:46.307 Removing: /var/run/dpdk/spdk_pid57703
00:19:46.307 Removing: /var/run/dpdk/spdk_pid57814
00:19:46.307 Removing: /var/run/dpdk/spdk_pid57927
00:19:46.307 Removing: /var/run/dpdk/spdk_pid58054
00:19:46.307 Removing: /var/run/dpdk/spdk_pid58168
00:19:46.307 Removing: /var/run/dpdk/spdk_pid58206
00:19:46.307 Removing: /var/run/dpdk/spdk_pid58244
00:19:46.307 Removing: /var/run/dpdk/spdk_pid58320
00:19:46.307 Removing: /var/run/dpdk/spdk_pid58437
00:19:46.307 Removing: /var/run/dpdk/spdk_pid58890
00:19:46.307 Removing: /var/run/dpdk/spdk_pid58965
00:19:46.307 Removing: /var/run/dpdk/spdk_pid59041
00:19:46.307 Removing: /var/run/dpdk/spdk_pid59061
00:19:46.307 Removing: /var/run/dpdk/spdk_pid59205
00:19:46.307 Removing: /var/run/dpdk/spdk_pid59223
00:19:46.307 Removing: /var/run/dpdk/spdk_pid59386
00:19:46.307 Removing: /var/run/dpdk/spdk_pid59408
00:19:46.307 Removing: /var/run/dpdk/spdk_pid59477
00:19:46.307 Removing: /var/run/dpdk/spdk_pid59501
00:19:46.307 Removing: /var/run/dpdk/spdk_pid59565
00:19:46.307 Removing: /var/run/dpdk/spdk_pid59588
00:19:46.307 Removing: /var/run/dpdk/spdk_pid59789
00:19:46.307 Removing: /var/run/dpdk/spdk_pid59831
00:19:46.307 Removing: /var/run/dpdk/spdk_pid59915
00:19:46.307 Removing: /var/run/dpdk/spdk_pid61268
00:19:46.307 Removing: /var/run/dpdk/spdk_pid61474
00:19:46.307 Removing: /var/run/dpdk/spdk_pid61614
00:19:46.307 Removing: /var/run/dpdk/spdk_pid62263
00:19:46.307 Removing: /var/run/dpdk/spdk_pid62469
00:19:46.307 Removing: /var/run/dpdk/spdk_pid62615
00:19:46.307 Removing: /var/run/dpdk/spdk_pid63258
00:19:46.307 Removing: /var/run/dpdk/spdk_pid63588
00:19:46.307 Removing: /var/run/dpdk/spdk_pid63728
00:19:46.307 Removing: /var/run/dpdk/spdk_pid65114
00:19:46.307 Removing: /var/run/dpdk/spdk_pid65367
00:19:46.307 Removing: /var/run/dpdk/spdk_pid65513
00:19:46.307 Removing: /var/run/dpdk/spdk_pid66898
00:19:46.307 Removing: /var/run/dpdk/spdk_pid67151
00:19:46.307 Removing: /var/run/dpdk/spdk_pid67291
00:19:46.307 Removing: /var/run/dpdk/spdk_pid68671
00:19:46.307 Removing: /var/run/dpdk/spdk_pid69117
00:19:46.307 Removing: /var/run/dpdk/spdk_pid69261
00:19:46.307 Removing: /var/run/dpdk/spdk_pid70741
00:19:46.307 Removing: /var/run/dpdk/spdk_pid71001
00:19:46.307 Removing: /var/run/dpdk/spdk_pid71147
00:19:46.568 Removing: /var/run/dpdk/spdk_pid72634
00:19:46.568 Removing: /var/run/dpdk/spdk_pid72899
00:19:46.568 Removing: /var/run/dpdk/spdk_pid73044
00:19:46.568 Removing: /var/run/dpdk/spdk_pid74532
00:19:46.568 Removing: /var/run/dpdk/spdk_pid75019
00:19:46.568 Removing: /var/run/dpdk/spdk_pid75168
00:19:46.568 Removing: /var/run/dpdk/spdk_pid75310
00:19:46.568 Removing: /var/run/dpdk/spdk_pid75734
00:19:46.568 Removing: /var/run/dpdk/spdk_pid76470
00:19:46.568 Removing: /var/run/dpdk/spdk_pid76846
00:19:46.568 Removing: /var/run/dpdk/spdk_pid77535
00:19:46.568 Removing: /var/run/dpdk/spdk_pid77970
00:19:46.568 Removing: /var/run/dpdk/spdk_pid78715
00:19:46.568 Removing: /var/run/dpdk/spdk_pid79119
00:19:46.568 Removing: /var/run/dpdk/spdk_pid81067
00:19:46.568 Removing: /var/run/dpdk/spdk_pid81512
00:19:46.568 Removing: /var/run/dpdk/spdk_pid81946
00:19:46.568 Removing: /var/run/dpdk/spdk_pid84033
00:19:46.568 Removing: /var/run/dpdk/spdk_pid84522
00:19:46.568 Removing: /var/run/dpdk/spdk_pid85046
00:19:46.568 Removing: /var/run/dpdk/spdk_pid86097
00:19:46.568 Removing: /var/run/dpdk/spdk_pid86422
00:19:46.568 Removing: /var/run/dpdk/spdk_pid87360
00:19:46.568 Removing: /var/run/dpdk/spdk_pid87683
00:19:46.568 Removing: /var/run/dpdk/spdk_pid88627
00:19:46.568 Removing: /var/run/dpdk/spdk_pid88950
00:19:46.568 Removing: /var/run/dpdk/spdk_pid89626
00:19:46.568 Removing: /var/run/dpdk/spdk_pid89901
00:19:46.568 Removing: /var/run/dpdk/spdk_pid89965
00:19:46.568 Removing: /var/run/dpdk/spdk_pid90007
00:19:46.568 Removing: /var/run/dpdk/spdk_pid90258
00:19:46.568 Removing: /var/run/dpdk/spdk_pid90436
00:19:46.568 Removing: /var/run/dpdk/spdk_pid90535
00:19:46.568 Removing: /var/run/dpdk/spdk_pid90634
00:19:46.568 Removing: /var/run/dpdk/spdk_pid90692
00:19:46.568 Removing: /var/run/dpdk/spdk_pid90718
00:19:46.568 Clean
00:19:46.568 12:11:49 -- common/autotest_common.sh@1453 -- # return 0
00:19:46.568 12:11:49 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:19:46.568 12:11:49 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:46.568 12:11:49 -- common/autotest_common.sh@10 -- # set +x
00:19:46.829 12:11:49 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:19:46.829 12:11:49 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:46.829 12:11:49 -- common/autotest_common.sh@10 -- # set +x
00:19:46.829 12:11:50 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:19:46.829 12:11:50 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:19:46.829 12:11:50 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:19:46.829 12:11:50 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:19:46.829 12:11:50 -- spdk/autotest.sh@398 -- # hostname
00:19:46.829 12:11:50 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:19:47.089 geninfo: WARNING: invalid characters removed from testname!
00:20:09.055 12:12:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:10.438 12:12:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:12.981 12:12:15 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:14.892 12:12:18 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:17.433 12:12:20 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:19.990 12:12:22 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:21.982 12:12:24 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:20:21.982 12:12:24 -- spdk/autorun.sh@1 -- $ timing_finish
00:20:21.982 12:12:24 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:20:21.982 12:12:24 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:20:21.982 12:12:24 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:20:21.982 12:12:24 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:21.982 + [[ -n 5435 ]]
00:20:21.982 + sudo kill 5435
00:20:21.993 [Pipeline] }
00:20:22.012 [Pipeline] // timeout
00:20:22.017 [Pipeline] }
00:20:22.031 [Pipeline] // stage
00:20:22.036 [Pipeline] }
00:20:22.050 [Pipeline] // catchError
00:20:22.059 [Pipeline] stage
00:20:22.061 [Pipeline] { (Stop VM)
00:20:22.073 [Pipeline] sh
00:20:22.359 + vagrant halt
00:20:24.904 ==> default: Halting domain...
00:20:33.052 [Pipeline] sh
00:20:33.336 + vagrant destroy -f
00:20:35.879 ==> default: Removing domain...
00:20:35.894 [Pipeline] sh
00:20:36.222 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:20:36.232 [Pipeline] }
00:20:36.246 [Pipeline] // stage
00:20:36.251 [Pipeline] }
00:20:36.264 [Pipeline] // dir
00:20:36.269 [Pipeline] }
00:20:36.282 [Pipeline] // wrap
00:20:36.286 [Pipeline] }
00:20:36.297 [Pipeline] // catchError
00:20:36.305 [Pipeline] stage
00:20:36.307 [Pipeline] { (Epilogue)
00:20:36.319 [Pipeline] sh
00:20:36.605 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:20:40.820 [Pipeline] catchError
00:20:40.822 [Pipeline] {
00:20:40.835 [Pipeline] sh
00:20:41.122 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:20:41.122 Artifacts sizes are good
00:20:41.132 [Pipeline] }
00:20:41.147 [Pipeline] // catchError
00:20:41.159 [Pipeline] archiveArtifacts
00:20:41.166 Archiving artifacts
00:20:41.300 [Pipeline] cleanWs
00:20:41.316 [WS-CLEANUP] Deleting project workspace...
00:20:41.317 [WS-CLEANUP] Deferred wipeout is used...
00:20:41.345 [WS-CLEANUP] done
00:20:41.347 [Pipeline] }
00:20:41.363 [Pipeline] // stage
00:20:41.368 [Pipeline] }
00:20:41.381 [Pipeline] // node
00:20:41.387 [Pipeline] End of Pipeline
00:20:41.425 Finished: SUCCESS